Episode Transcript
Brent Warner 0:00
What is the opposite of cheating, and how do we approach academic integrity in a time when it's so easy to use AI? We're discussing all of this with Tricia Bertram Gallant, author of the book The Opposite of Cheating. This is The HigherEdTech Podcast, Season 7, Episode 12.
Tim Van Norman 0:28
Welcome to today's HigherEdTech Podcast. I'm Tim Van Norman, the Interim Assistant Director of Technology Services at Irvine Valley College and Adjunct Professor of Business at Cypress College.
Brent Warner 0:39
And I’m Brent Warner, Professor of ESL here at IVC. We both enjoy integrating technology into the classroom, which is what this show is all about. Welcome.
Tim Van Norman 0:48
We’re glad you’re here with us. School started.
Brent Warner 0:51
We're in, yeah, we got that early start on the semester. We were in week one when we recorded this, and we'll be into week two by the time this episode is out. So I hope everybody's doing okay. And Tim, we have another guest. We have two guests in a row, which is kind of unusual for us.
Tim Van Norman 1:07
Very unusual, yes. So Tricia Bertram Gallant is somebody that, first of all, I met her co-author at a conference last spring, and a week later I was at another conference and met both of them. And Brent, you met her at the same time, so I'm looking forward to this particular interview on The Opposite of Cheating.
Brent Warner 1:34
Yeah. So, Tricia, welcome to the show
Tricia Bertram Gallant 1:37
Thanks so much for having me.
Brent Warner 1:38
Yeah, we're excited. So last year at the AI show, I actually had bought your book about a week before, and I brought it down with me to San Diego because I thought, okay, I'm going to read parts of this during my downtime. And then all of a sudden I turned a corner, well, I don't know how you turn a corner at that exact conference, you walk behind a black curtain, I guess, and there you were, your session was going on right then. So it was a nice bit of serendipity. So Tricia, before we get too far, just so that everybody knows: you're the co-author of the book The Opposite of Cheating, and you're also the director of the Academic Integrity Office and the Triton Testing Center at UCSD. Yeah, hard to get all those words in (laughter)
Tricia Bertram Gallant 2:28
I know I have it – I’m the director of AIO and TTC at UCSD, yeah, University of California, San Diego (laughter)
Brent Warner 2:37
That's right. So when we saw you, and correct me if I'm wrong, you went up on stage and I think the first thing you said, and I might be wrong on the exact phrasing, was that the opposite of cheating is not just not cheating, it's learning. Can you start with just a broad overview of your argument and what you mean by that?
Tricia Bertram Gallant 2:59
Yeah, and shout out to David, my co-author, because The Opposite of Cheating was his idea for the title. We had previously edited a book together in 2022, and I won the title debate on that book, and he won it on this book. So that's really his saying: the opposite of cheating is not just integrity, it's learning, because integrity and learning are tied. Presuming that the learning objectives are accurate and the assessments are accurate measures of those learning outcomes, then if students shortcut those assessments, if they misrepresent their knowledge and abilities, if they cheat on them, then they're actually also not learning. So it interferes with the student's learning, and it interferes with the professor's ability to validate that learning. And that's what we mean by the opposite of cheating. It's trying to get faculty out of the headspace of policing and surveilling and really focusing on: how do I validate learning? How do I validate that this enrolled student has the knowledge and skills that I'm certifying?
Tim Van Norman 4:11
Nice. As a teacher putting together my spring class, because at Cypress College where I teach it starts February 2, I'm in the process of putting it together, and this sounds really, really good, like what I'm looking for. Thank you. So you've often argued against the bad apple theory, that cheating is about a lack of character. In the community college world, students are often balancing 40-hour work weeks and child care, and we hear that students cheat to survive, to meet a deadline, et cetera. How do we address that as a systemic, and sometimes a time management or resource, issue rather than a moral failing?
Tricia Bertram Gallant 5:06
Well, it could be both, or either. So Jason Stephens, a treasured colleague of ours, wrote a great Change article a few years ago saying cheating is normal and natural, but not inevitable, right? So yes, humans cheat. If I asked you both, Brent and Tim, if you've ever been dishonest, disrespectful, irresponsible, untrustworthy, or unfair, I would hope that you would be honest and say that you have been all of those things.
Tim Van Norman 5:35
We probably would be lying and say no, right, right.
Tricia Bertram Gallant 5:38
And so, yeah, for a lot of species actually, and Jason covers this in his article, cheating is a survival strategy. If you think about the pandemic, and you think about Maslow's hierarchy of needs, at the bottom there is safety, security, housing, whatever it is. We were all focused on that, and we all know that a lot of cheating happened during the pandemic, because frankly, integrity and learning, self-actualization, were the last things on people's minds during a pandemic. But I don't want to steal moral agency from people, right? If we are thoughtful, if we are metacognitive and thinking about our thinking, if we're self-reflective, if we're conscious, if we're mindful, then we can notice when we are facing an ethical issue, and we can make different choices. So take the student who has childcare and so little time on their hands. Yeah, cheating is one option to resolve that, but so is asking for an extension. So is just talking to the professor about alternatives, right? So it's not like cheating is the only survival mechanism; there are other ones. The fact that they chose that one shows us that we've got a problem in our culture and in our institutions: one, students aren't taught how to make better ethical decisions in times of stress and pressure, and two, we don't invite it. So, for example, for a long time I was trying to convince faculty to put in their syllabi something like an integrity extension, right? Like, if you need an extension on something so that you don't cheat, then please ask me for it. So yeah, I don't want to take moral agency away from us as humans. I think that sets us apart from the machines, from AI, and it sets us apart from other animals. And it's not that simple of a solution, and it's not that simple of an explanation either. Students are humans. They're complicated. They're going to cheat for all sorts of reasons, and sometimes it may be, like, I'm just going to be a cheater. Most of the time, though, it's because they're not thinking about it as an ethical issue, and they're choosing it for other reasons. Does that make sense? Absolutely.
Brent Warner 7:53
I think one of the things that I appreciate about the way you talk about all this, Tricia, is that it's not all one direction or all another direction, as if it were that simple. I know it's so easy for us to frame it like that, right? To say, oh, well, this is exactly what's happening. And part of what you do in your work is saying, hold on a second, let's talk about the good and the bad mixed together. It's more like a stew than a salad, right? It's not just separate pieces of a meal. And so I find that a useful approach, to be willing to say good things happen and bad things happen all at the same time, and let's parse them, talk about them, and recognize that they're not going to be fully separated.
Tricia Bertram Gallant 8:40
Yeah, and even the language of good and bad things, right? Even categorizing them. So if a student makes an academic error in your class, you don't get all angry and mad. You might get a little disappointed that you think they should have tried harder, or maybe you get disappointed in yourself, that maybe you could have taught it better, but you don't get this moral outrage if they have an academic failure, right? But as soon as they have an ethical mistake or failure, oh my gosh, I can't believe students are cheating in my class. How horrible. How could they do this to me? We take it so personally. What David and I invite folks to do is keep your educator hat on. And I'm not trying to infantilize our students, but it's not very different than keeping your parent hat on. Let me explain: if your child makes a mistake, you don't stop caring about them, loving them, hoping for the best for them, trying to help them do better. You might be disappointed. I always knew it was worse to disappoint my parents than to make them angry, right? But even if you're disappointed, you're not going to abandon them, you know. And so with our students, we're just trying to encourage faculty to see an ethical mistake as equally good a learning opportunity as an academic mistake, and to ask how we best leverage that. Now, to be fair, instructors need a lot more support from their institutions than most institutions are giving. So it shouldn't be up to every individual faculty member to try and leverage that ethical mistake for a teachable moment; the institution should be helping. But still, it would be great to have that mind frame. Yeah, yeah, absolutely.
Brent Warner 10:17
So on this show, back several years ago, we talked about the inequities of online proctoring, especially during COVID, right? All these things came up, and now it seems like those conversations are coming back up because of AI. I've got some colleagues who are like, okay, well, I'm putting Proctorio back on, I'm putting whatever back on, right? And so there's this growing concern, though, about high-tech surveillance, you know, proctoring software that disproportionately flags students who don't have private, quiet homes or high-speed internet, things like this. So we're kind of wondering, how can community college leaders ensure that academic integrity doesn't become some sort of tool for digital redlining, punishing poverty, some of these issues that, in turn, create accusations against students, et cetera? Wondering if you have thoughts on that.
Tricia Bertram Gallant 11:12
Absolutely not, I have no thoughts. I'm kidding. I have thoughts on everything to do with this topic. Okay, this is one of my favorite things to talk about right now, because it's such a thorny issue. And I love thorny issues, because I think we're all very smart people, and we can figure it out if only we talk about it. So thank you for asking that question. What I have noticed, and particularly, like you said, it arose during the pandemic, was that institutions started pitting our values against one another and choosing one at the sacrifice of the other. So as you mentioned, during the pandemic it was academic integrity versus access, or academic integrity versus privacy, right? Access was the piece about how we have to give remote classes and remote exams because of access issues. The privacy piece was that we don't want third-party proctoring companies surveilling students in their private bedrooms, or not-so-private bedrooms, with all these other people around, right? Fair enough. Or integrity against equity, because not all students have equal bandwidth, and so on. All of those values are so important. But what we did during the pandemic, largely, was throw integrity out. We said, no, we're not comfortable with this stuff. And honestly, we didn't have time to find a solution, so I'm not blaming us. I'm not trying to be a, what do they call it, a Monday morning quarterback, the hindsight-is-20/20 person. It's always easy in hindsight. But what I'd like us to do is learn from that. And now it feels like we don't have any time either, because AI developed so fast, and yet it doesn't feel as urgent as the pandemic did, because our immediate safety and survival isn't at stake here. So what I'd like us to figure out is, how do we uphold both of those values? They're not in opposition, but I think they are sometimes in tension, and how do we uphold them both? Rushworth Kidder talks about this in his book as a trilemma, right? When we see, or perceive, two values pitted against each other, we think we have to choose one or the other, like it's a dilemma. And what he said is, no, it's probably a trilemma, where there's a third solution that could uphold both values, maybe not perfectly, but better than throwing one out for the other. So we're talking a lot about this here with AI. Okay, let me back up. We are in the job of facilitating and validating student learning. Our associate degrees, our bachelor's degrees, master's, PhDs, all our certifications say that a student has reached a certain level of knowledge and skill, right? So we owe it to society to, one, verify that it's the enrolled student who's completing the assessment, and two, know what aids, if any, they used in completing the assessment. So I believe we should not be graduating any student from a fully online, asynchronous degree where no assignments were supervised. That doesn't mean surveilled; it could mean close observation. Think trainee teachers who have to go into the classroom and be watched by an experienced teacher. Think doctors, think nurses, think mechanics, think any trade or practical discipline where they are closely observed by someone more experienced who certifies their knowledge, right?
So we can figure this out. And what this means is that all of our courses either have to have at least one secure assessment, or we can do what the Australians are suggesting, which is milestone or benchmark testing, I'm going to call it benchmark testing, where even if you're enrolled in an asynchronous program, there are going to be benchmarks at which you have to show up for a supervised assessment. We're going to tell you what those are ahead of time. You'll have lots of time to plan. Maybe we'll even say it's this week, and you can choose the day and time that's convenient for you. But we are going to have to think about that, at least for our foundational courses, because we, one, have to validate learning, and also, I do think, and we won't know this for a few years, that it's imperative that our students develop some foundational knowledge before we teach them how to offload to AI. Like, you know, I want my doctors to memorize where my kidney is. I don't want them asking ChatGPT in the surgery room, where's her kidney, what do I do now? Right? We still need people to have knowledge. The research we have seen coming out is that experts who are using gen AI in the workplace are not experiencing much cognitive decline as a result, but novices are, and our students are novices. So it's a long answer to your question, but I think we're smart enough to figure out how we uphold integrity, access, equity, and privacy in ways that are about close observation in an old-fashioned way: apprenticeships, observations, presentations with Q&A, oral assessments, maybe a computer-based assessment in an assessment center because it's a computer programming course, or something else. I think we can figure this out, but we have to remember that all these values are important and need to be upheld.
Brent Warner 16:46
I love it. Yeah, it's funny, you're talking about the doctor thing, and I always think about that scene from The Simpsons, I'm not sure if you remember it, when Homer is about to go under and Dr. Nick, the very suspicious doctor, says, just as Homer's eyes are closing, "What the hell is that?" Yeah, it's that same thing. And I talk to my own students about that too. I say, well, you know, it works really well when you have the expertise and you can assess it, check it, and double-check it against your own ideas. And it doesn't work so well when you don't know anything about the topic and you're just trusting blindly.
Tricia Bertram Gallant 17:21
Yeah, if you just totally offload everything. So, yeah, we need to teach our students how to use it responsibly and ethically. And I think there are going to be courses or situations in which we are going to restrict use of it so that we can make sure they have that foundational knowledge. And look, secure assessments are more expensive, or more resource intensive, than non-secure assessments; that's why we do so many non-secure assessments. So maybe it is, like, after this cluster of courses you're going to come in and demonstrate your abilities. Or maybe, you know, every school will need to figure that out for themselves, and maybe even every program and department will have to figure out where and when those occur within their major or their program.
Tim Van Norman 18:04
Excellent, yeah,
Tricia Bertram Gallant 18:05
Just sorry, one more thing to add to that. An important question for us to ask ourselves is, what value are we adding in an AI world? Students can learn everywhere. They can learn from AI, they can learn from YouTube, they can learn from their world, their job. A lot of community college students, as you said, are working 40 hours a week. They don't need us for learning. They need us for labs, you know. They need us sometimes for resources. They need us sometimes for structure, because they're maybe not going to be self-directed or self-regulated. They need us for a lot of things. But what do they need us for, to come to class with a computer and sit there and take notes? Do they need us for podcasted lectures, or do they need us for human-to-human learning experiences? And so I really think there's going to be a lot more colleges and universities closing across America over the next few years. It already started before AI, and I think the ones that survive are going to be those who really double down on the human-to-human learning experiences, the experiential learning, and the building of durable human skills. And I'm not sure that online asynchronous does that.
Brent Warner 19:14
I think about that too, and I have several friends who are quite big advocates for asynchronous online, with good reason. But I also worry about this side of it too, which is, okay, how do we align these different ideas? So it's long conversations with people who are like-minded, yes.
Tim Van Norman 19:33
And as an online, asynchronous teacher, I agree. I think that it can work very well in exactly the right situation, the right teacher, the right student, et cetera, but you get outside of that right situation, and I think people struggle, and they're not getting what they need out of it. Yeah.
Tricia Bertram Gallant 19:59
Yeah, and to be fair, in-person courses require all of that as well. Again, if it's an in-person course but you're just sitting there lecturing at students, and they're not engaging with each other, that's not value added either. Absolutely. The one thing is, I want us to separate the modality of learning from the modality of assessment as we're thinking through these thorny challenges, so not pitting online learning against in-class learning, but unsupervised assessments against supervised assessments. And also, I just forgot my second point, but anyway, that one was the most important. I had two points.
Tim Van Norman 20:39
So you might think of it later, that's good. So as we're looking at this in the era of AI, can you talk about ethical judgment as a trade skill? And also, how can we as community college leaders help students to see themselves as scholars in progress, rather than just customers buying a certificate? I feel like a lot of times those two get equated: ethical judgment as a trade skill, and customers buying a certificate.
Tricia Bertram Gallant 21:14
So ethical judgment, and I'm starting to call it ethical intelligence, is one of those durable human skills that we need to help people develop. We have shied away from it for too long, and it is a skill. It is something, again, that we have and the machines don't. It's imperative, and we've ignored it, and we might be seeing the effects of that in our society right now. So I think we need to double down on that as a durable human skill and develop it just like other durable human skills, interpersonal communication, yada yada yada. I don't think we should treat our students as budding scholars. The majority of them are not going to be scholars. The majority of them are going to be professionals and citizens trying to live out there in a very complicated world. I think all of our students need to know how to critically read scientific literature and research so they can make good decisions, so they can astutely and critically think about the output of these AI tools. But I don't necessarily think they need to be scholars in the sense of reading a lot of academic articles and writing in an academic way. So I do think we need to rethink what it is we're trying to produce, if you want to put it that way, out of our institutions, and I'm not sure scholars is it for the majority of the population.
Brent Warner 22:39
Interesting. I see what you're saying. So, like, there are lots of people who have high ethical standards, and they're not necessarily, you know, out there in academia. I don't know, what's the stereotypical opposite image, right? Like hunting out there by yourself in the woods or something like that, right? I can see where you're saying, hey, we're still teaching you skill sets that you can use in any part of life, not just, you know, reading deep books or things that are connected with the ivory tower.
Tricia Bertram Gallant 23:13
Yeah. And I'm sorry to all the philosophers listening, but I don't think we need to read Aristotle and Kant and all these great thinkers to learn how to think more ethically in today's world. So, for example, we get more students these days saying, I'm fully against using AI because of the environmental damage it's causing. Perfect opportunity to critically think and ethically think. Is there an environmental cost to using AI? Absolutely. Is there also one to streaming Netflix for hours, or streaming TikTok videos for hours? Absolutely, right? And eating meat, driving your car, whatever. So, you know, these are the kinds of mental exercises that are meaningful to students, which might raise their intrinsic motivation to actually do the work, because they're interested in the topics, rather than having to read a book and just talk about what Aristotle thought. Applying what Aristotle thought to these contemporary situations is much more meaningful, much more engaging. It might raise intrinsic motivation, therefore it might reduce cheating, and it encourages students to really apply this stuff to their own lives, which is really what we need them to do, because, again, our job is to help produce the next generation of citizens and professionals.
Brent Warner 24:34
There's so much to talk about here. We're gonna have one more question. (I talk a lot.) Oh, no, that's what we wanted, and we're just like, oh, there are more reasons to talk to you more later. But the last one I kind of want to get to, because especially at the beginning of this semester and the end of last semester, you know, I went around and talked to different teachers, especially adjunct faculty, and I just saw that glazed-over look in their eyes. We've got teachers driving around, freeway flyers going to two, three, four different schools, and they don't have the bandwidth for, you know, these intensive assignment redesigns. Like, how do I deal with AI, right? What am I going to do? They're just tired, right? And a lot of them seem to believe that they're seeing AI in every assignment. So what are some strategies that maybe a busy adjunct professor can implement without, you know, burning themselves out?
Tricia Bertram Gallant 25:27
Yeah. So, one, I hear you. All faculty right now are being asked to rebuild the plane while they're flying it, and it is simply unfair, and it is most unfair to the adjunct faculty. I have been one. I took three months to prep my team-based learning, flipped classroom, and I started getting paid on the first day of class, right? So three months of unpaid labor, so I get it. And from the institutional point of view, from our government's point of view, we've got to stop doing that. For publicly funded schools, we've got to stop treating people like this and relying on a lot of unpaid labor. Okay, so that's my big point. Now, you're in that situation. You can't control when you get paid and all that kind of thing. So the first thing I would say is just look at your current assessments. If you haven't played with them in gen AI, you should, because it changes so rapidly that if you haven't played with it since it came out, you're not really fully aware of what it can do, especially with Perplexity Comet and other AI agents like that. So look at your assessments: if they're unsupervised, students are going to use gen AI tools on them, and therefore you have to decide, does that undermine their learning and/or my ability to validate their learning? And if it does, then you're going to have to either change the assessment or look at securing it. Securing it isn't an option if you're in an online, asynchronous class, but let's put that in a bracket for a second. Securing it, if you're an in-person or even a synchronous online professor, would be the easiest bridge option, rather than redesigning the whole assessment, right? And you don't have to do that for every assessment in the class. Look at, you know, two or three supervised assessments, and the rest can all be formative assessments to facilitate student learning. Have them disclose their AI use, have them think about their AI use. We talk to our students about if, how, when, and why: if I should use it, when I used it, how I used it, and why I used it. And have them reflect on that, so that you're learning from their experience and they're learning from their experience. I think that's the easiest thing to do. And even backing up, in order to determine that, ask your students. Listen to your students. Ask them, are you using these tools? Why are you using them? Hey, here's the syllabus for our class. Do you think there's any point in these assessments where you could use it ethically? You might learn something from them, and you might say, oh yeah, I'm actually fine with that, and implement that as an allowed use. So you really have to figure it out. You might have to secure some assessments and move away from fully unsupervised assessments. And if you can't do that and you suspect AI is being used, then you might have to change your rubric. So if gen AI can produce a C paper, then that's now an F paper. And that's kind of unfortunate, because maybe that C paper did actually come from a human, but C's get degrees, and if gen AI is just being used to pass through the course, we're still certifying that student, right? So we're still not validating that student's learning.
So that might be another thing that could be done more easily, and you can use tools like ChatGPT to do these things: to rethink your assignments, to rethink your rubric, to look at vulnerabilities in your assignments. So you can work with the tools to do that, yeah.
Brent Warner 29:03
I love it. Okay, so there's a lot to get started with here. So Tricia, as we wrap up, on our campus we are going to be using your book for a book club this semester, so we're going to be working through those conversations, and then we're working with you to get you onto campus at the end of the semester to come and talk with everybody. Tim's got his copy of the book ready. We got our bulk order in, so anyone who's listening that's at IVC, you can come and get the book; we already got it for you. So we're happy to have you on the show first, and then as we move forward, keep talking about these areas of the conversation, like, how can we deal with this? Because it's not just a once-and-done conversation, it's going to be a
Tricia Bertram Gallant 29:46
long term, yeah, and it’s so context dependent, that’s right,
Brent Warner 29:50
That's right. And so each person in each department is going to have to start having their own conversations about how it works for them. So we're really excited to unfold that part of the conversation on our campus too. If anybody here is listening, please join us; we would love to have you in on it together with us. And Tricia, how can people find you if they're looking for you, if they're not on our campus, if they can't get a book from us?
Tricia Bertram Gallant 30:15
Super easy: theoppositeofcheating.com. And we have our own podcast, so if you'd like to hear from people who are thinking about this, talking about this, implementing ideas, you can check out that podcast as well, and you can get our contact information from there too.
Brent Warner 30:30
Awesome. Thank you so much.
Tim Van Norman 30:36
Thank you for listening today. For more information about this show, please visit our website at thehigheredtechpodcast.com
Brent Warner 30:44
As always, we do want your feedback, so please go to thehigheredtechpodcast.com and let us know your thoughts.
Tim Van Norman 30:51
For everyone at IVC that's listening, if you need help with technology questions, please contact IVC technical support. If you have questions about technology in your classroom, please stop by Library 213, or contact me, Tim Van Norman, at tvannorman@ivc.edu
Brent Warner 31:06
and if you want to reach out to me about the show, you can find me on LinkedIn at Brent G Warner.
Tim Van Norman 31:11
I’m Tim Van Norman
Brent Warner 31:13
and I’m Brent Warner, and we hope this episode has helped you on the road from possibility to actuality. Take care everybody.
What is the opposite of cheating, and how academic integrity in a time when it’s so easy to use AI, we’re discussing all of this with Patricia Bertram, Gallant, author of the book The opposite of cheating. This is the higher ed tech podcast season seven, Episode 12. You Tim,
Tim Van Norman 0:28
welcome to today’s higher ed tech Podcast. I’m Tim Van Norman, the Interim Assistant Director Technology Services at Irvine Valley College and Adjunct Professor of Business at Cypress College.
Brent Warner 0:39
And I’m Brent Warner, Professor of ESL here at IVC. We both enjoy integrating technology into the classroom, which is what this show is all about. Welcome.
Tim Van Norman 0:48
We’re glad you’re here with us. School started.
Brent Warner 0:51
We’re in, yeah, we got that early start on that semester. We’re already started with week week one. We’re into week two. At the point of this episode’s out. So hope everybody’s doing okay. And Tim, we have, we have another guest. We have two guests in a row, which is kind of unusual for us.
Tim Van Norman 1:07
Very unusual. Yes. So Tricia Bertram gallant is somebody that, first of all, I had met her co author at a conference last spring, and a week later, was at another conference and met both of them and Brent. You met her as the same time, so looking forward to this particular interview as the opposite of cheating.
Brent Warner 1:34
Yeah. So, Tricia, welcome to the show
Tricia Bertram Gallant 1:37
Thanks so much for having me.
Brent Warner 1:38
Yeah, we’re excited. So, uh, so last year at the AI show, we we, I actually had bought your book about a week before the AI show, and then I was I brought it down with me to San Diego, because I’m like, Okay, I’m going to read parts of this during my downtime and everything. And then all of a sudden, I turned a corner. Well, I don’t know how you turn a corner in that exact conference. You walk behind a black curtain, I guess. And then there you were in your like your session was going on exactly with it. So it was kind of a nice serendipity around it. So Tricia, before we get too far, just so that everybody knows. So your book, you’re the co author of the book, the opposite of cheating, and then you’re also the director of academic integrity, the academic integrity office and Triton Testing Center at UCSD, yeah, hard to get all those words in (laughter)
Tricia Bertram Gallant 2:28
I know I have it – I’m the director of AIO and TTC at UCSD, yeah, University of California, San Diego (laughter)
Brent Warner 2:37
That’s right. So, so when we saw you, correct me if I’m wrong. But you kind of, you went up on stage and you were talking, and I think the first thing that you said was the opposite of, I might be wrong on the exact phrasing, but you said, I think the opposite of cheating was, is not cheating, it’s learning. Can you start with just a broad overview of your argument and kind of what you’re talking about with that?
Tricia Bertram Gallant 2:59
Yeah, and shout out to David, my co author, because the opposite of cheating was his idea for the title. We had previously edited a book together just in 2022 and I, and I won the title debate on that book, and he won it on this book. So yeah, that’s really his, his saying the opposite of cheating is not just integrity. It’s learning, because integrity and learning are tied. If students presumably that the learning objectives are accurate and the assessments are accurate measures of those learning outcomes, then if students shortcut those assessments, if they misrepresent their their their knowledge and abilities. If they cheat on them, then they’re actually also not learning. So it interferes with the students learning. It interferes with the professor’s ability to validate that learning. And so that’s what we mean by the opposite of cheating. It’s trying to get faculty out of the head of policing and surveilling and really focusing on, how do I validate learning? How do I validate that this emerald student has the knowledge and skills that I’m certifying
Tim Van Norman 4:11
Nice – as as a teacher putting together my spring class, because at Cypress College where I teach. It starts February 2, so I’m in the process of putting it together. And this sounds really, really good. Like, like what I’m looking for, absolutely. Thank you. So you’ve often argued against the bad apple theory. Yeah, that cheating is about a lack of character in the community colleges. In the community college world students are often balancing 40 hour work weeks child care. We hear that students cheat to survive, to meet a deadline, etc. How do we address that as symptom, symptomatic, systemic, I’m sorry, and sometimes time. Management or resource or resource issue rather than a moral failing?
Tricia Bertram Gallant 5:06
Well, it could be both and or either. So Jason Stevens, a treasured colleague of ours, wrote a great change article a few years ago saying cheating is normal, natural, but not not inevitable, right? So, yes, humans cheat. If I asked you both Brent and Tim, if you’ve ever been dishonest, disrespectful, irresponsible, untrustworthy or unfair, I would hope that you would be honest and say that you have been all of those things.
Tim Van Norman 5:35
We probably would be lying and say no, right, right.
Tricia Bertram Gallant 5:38
And so yeah, for a lot of species, actually. And Jason covers this in his article, is cheating is a survival strategy, right for and if you think about the pandemic, and you think about Maslow’s hierarchy of needs at the bottom there, safety, security, safety, security, housing, whatever it is, we were all focused on that and and we all know that a lot of cheating happened during the pandemic, because, frankly, integrity and learning were the last thing. Self actualization were the last things on people’s minds during a pandemic. So I don’t want to steal moral agency from people, though, right? Like if we are thoughtful, if we are metacognitive and in thinking about our thinking, if we’re self reflective, if we’re conscious, if we’re mindful, then we can notice when there we are facing an ethical issue, and we can make different choices. So take the student who has childcare and so little time on their hands. Yeah, cheating is one option to resolve that, but so is asking for an extension. So is just talking to the professor about alternatives, right? So it’s not like cheating is the only survival mechanism. There are other ones. So the fact that they chose that one shows us that we’ve got a problem in our culture and in our institutions that one students are taught how to make better ethical decisions in times of stress and pressure. And two, we don’t invite it. And so, for example, for a long time I was, I was trying to convince faculty to put in their syllabi, like in hashtag integrity, integrity extension, right? Like, if you need an extension on something to not cheat, then please ask me for it. So yeah, there. I don’t want to take moral agency away from us as humans. I think that sets us apart from the machine. Ai, it sets us apart from other animals. And it’s not that simple of a solution. It’s not that simple of an explanation either. Students are humans. They’re complicated. They’re going to cheat for all sorts of reasons, and sometimes it may be for like, I’m just going to be a cheater. And most of the time, though, it’s because they’re not thinking about it as an ethical issue, and they’re choosing it for other reasons. Does that make sense? Absolutely.
Brent Warner 7:53
I think one of the things that I appreciate about the way you talk about all this stuff Tricia is that it’s not like, it’s not all one direction, it’s not all another direction, right? And it was that simple. I know it’s so easy for us to kind of hunt it like that, right? And say, like, oh, well, this is exactly what’s happening. And part of what you do in your work is kind of saying, Hold on a second. Let’s talk about the good and the bad mix together more like a, you know, it’s like the it’s more like a stew or a salad, right? That it is just like a separate pieces of a meal, right? And so I thought I find that a useful approach to be willing to say good things happen and bad things happen all at the same time. And, you know, let’s, let’s, let’s parse and talk about them and recognize that they’re not going to be separated fully.
Tricia Bertram Gallant 8:40
Yeah, and even the language good and bad things, right? Even categorizing them versus so if a student makes an academic error in your class, you don’t get all angry and mad. You might get a little disappointed that you think they should have, you know, tried harder, or maybe you get disappointed yourself that maybe you should you could have taught it better, but you don’t like get this moral outrage if they have an academic failure, right? But as soon as they have an ethical mistake or failure, oh, my gosh. Oh, I can’t believe students are cheating in my class. How horrible. How could they do this? For me, we take it so personally. And what I what David and I invite folks to do is keep your educator hat on. And it’s not I hate to I’m not trying to infantilize our students, but it’s not very different than keeping your parent hat on. And let me explain, if your child makes a mistake, you don’t stop caring about them, loving them, hoping they’re for the best of them, trying to help them do better. You might be disappointed. I always knew it was always worse to be to disappoint my parents than to make them angry, right? But you might be disappointed, but you’re not going to abandon them, you know. And so with our students, we’re just trying to encourage faculty to see an ethical mistake as equally a good learning opportunity as an academic mistake. And how do we best leverage that? Now, to be fair, instructors need a lot more support for. Their institution than than most institutions are giving. So it shouldn’t be up to every individual faculty member to try and leverage that ethical, that ethical mistake, for a teachable moment, the institution should be helping. But still, it would be great to have that mind, that that mind frame, yeah, yeah, absolutely.
Brent Warner 10:17
So on this show back several years ago, we talked about, you know, the inequities of online proctoring, especially during covid, right? These, all these things came up and and now it seems like those conversations are kind of revisiting or coming back up more because of AI. I’ve got some of my colleagues, they’re like, Okay, well, I’m putting Proctorio back on. I’m putting whatever back on, right? And so there’s this growing concern, though, also the high tech surveillance and, you know, proctoring software disproportionately flag students who don’t have private quiet homes or high speed Internet things like this. So, like, we’re kind of wondering about, how can community college leaders ensure that academic integrity doesn’t become some sort of tool for, you know, digital redlining, punishing poverty, you know, some of these issues that might come up that, in turn, you know, creates accusations against students, etc. Wondering if you have thoughts
Tricia Bertram Gallant 11:12
on that, absolutely not. I have no thoughts. I’m kidding. I have thoughts on everything to do with this topic. Okay, this is one of my favorite things to talk about right now, because it’s such a thorny issue. And I love thorny issues because I think we’re all very smart people, and we can figure it out, if only we talk about it. So thank you for asking that question. What I have noticed, and particularly like you said, It arose during the pandemic, was that institutions started pitting our values against one another and choosing one at the at the sacrifice of the other. So as you mentioned during the pandemic, it was academic integrity versus access, or academic integrity versus privacy, right? So access was the piece about we have to give remote classes. We have to give remote exams because of access issues. The privacy thing was, we don’t want to have third party proctoring companies surveilling students in their private bedrooms, or not their private bedrooms, with all these other people around, right? Fair enough, all three of those values are super important, or equity, integrity against equity, because not all students have equal bandwidth. Equal bandwidth. Blah, blah, blah, so all of those values are so important. But what we did during the pandemic, largely, was we threw integrity out. We said it, no, we’re not comfortable with this stuff, and so instead, I mean, we didn’t have time to find a solution, honestly. So I’m not blaming us. I’m I’m not trying to be a what do they call it? Quarterback, what’s that saying we you know, hindsight is 2020, Monday morning. Quarterback, the hindsight person. You know, it’s always easy, but what I’d like us to do is learn from that. And now it feels like we don’t have any time either, because AI developed so fast, and yet, it doesn’t feel as urgent as the pandemic did, because our our immediate safety, survival, isn’t at stake here. So what I’d like us to figure out is, how do we uphold both those values and they are they’re not in opposition, but I think they are sometimes in tension, and how do we uphold them both? Brushworth Kidder talks about this in his book as a trilemma, right? So when we see two values pitted against or we perceive two values pitted against, what each one each other, we we think we either have to choose one or the other, like it’s a dilemma. And what he said is, no, it’s it’s probably a trilemma when there’s a third solution that could uphold both values, maybe not perfectly, but better than throwing one out for the other. So we’re talking a lot about this here, and with AI, we are okay. Let me back up. We are in the job of facilitating and validating student learning, our associate degrees, our bachelor’s degrees, master’s PhDs, all our certifications that a student has reached a certain level of knowledge and skill sets, right? So we owe it to society to one verify that it’s the enrolled student who’s who’s completing the assessment, and that we can, we can know what AIDS, if any, they used in the completion of assessment. So we should not be graduating. I believe any student from a fully online, asynchronous degree where no assignments were supervised, that doesn’t mean surveilled. It could mean close observation. Think teach trainee teachers who have to go in the classroom and be be watched by an experienced teacher, think doctors, think nurses, think mechanics, think any like any trade or practical discipline where they are closely observed by someone more experienced to certify their knowledge, right? 
So we can figure this out. And so what this means is, if we have some, we all of our courses either have to have. A at least one secure assessment. Or we can do what the Australians are suggesting, which is milestone or benchmark. I’m going to call them benchmark testing, where, even if you’re enrolled in asynchronous program, that there are going to be benchmarks at which you have to show up for a supervised assessment. We’re going to tell you what those are ahead of time. You’ll have lots of time to plan. Maybe even we’ll say it’s this week, and you can choose the day and time that’s convenient for you. But we are going to have to think about that for at least for our foundational courses, because we we one have to validate learning, and also, I do think, and we won’t know this for a few years, that it’s imperative that our students develop some foundational knowledge before they we teach them how to offload to AI, like, you know, I want my doctors to memorize where my kidney is. I don’t want them asking chat GBT in the surgery room, like, where’s her kidney? What do I do now? Right? We still need people to have knowledge. The research that we have seen coming out is experts who are using Gen AI in the workplace are not experiencing much cognitive decline as a result, but novices are and our students, students are novices. So anyway, so it’s a long answer to your question, but I think we’re smart enough to figure out, how do we uphold integrity, access, equity and privacy in ways that are about close observation in an old fashioned way, apprenticeships, observations, presentations with Q and A, oral assessments, maybe a computer based assessment in an assessment center, because it’s a computer programming course or or something else. I think we can figure this out and but we have to remember that all these values are important and need to be upheld.
Brent Warner 16:46
I love it. Yeah, it’s funny. You know, you’re talking about the doctor thing, and I always think about that scene. I’m not sure if you remember from The Simpsons when Homer is about to go under and Dr Nick, the very suspicious doctor, like just as his eyes are closing. What the hell is that? Yeah, it’s like, yeah, that same thing. And I talked to my own students about that too. I say, Well, you know, it works really well when you have the expertise and you can, you can, you know, assess it and check it and kind of double check about your own ideas and your own things. And it doesn’t work so well when you don’t know anything about the topic, and then you’re just trusting blindly,
Tricia Bertram Gallant 17:21
yeah, if you just totally offload everything. So, yeah, we need to teach our students how to use it responsibly and ethically. And I think there are going to be courses or situations in which we are going to restrict use of it so that we can make sure they have that foundational knowledge. And look, it might not be secure. Assessments are very expensive or more resource intensive than non secure assessments. That’s why we do so many non secure assessments. So maybe it is like after this cluster of courses you’re going to come in and demonstrate your your abilities. Or maybe, you know it, every school will need to figure that out themselves, and maybe even every program and department will have to figure that out of where, when those occur within our major or our program.
Tim Van Norman 18:04
Excellent, yeah,
Tricia Bertram Gallant 18:05
Just sorry – One more thing to add to that, an important question for us to ask ourselves is, what value added are we bringing to an AI world? Students can learn everywhere. They can learn from Ai. They can learn from YouTube. They can learn from their world, their job, like a lot of community college students, as you said, are working 40 hours a week. Working 40 hours a week. They can, they can. They don’t need us for learning. They need us for labs. You know, they need us sometimes for resources. They need us sometimes for structure and because they’re not maybe going to be self directed or self regulated. They need us for a lot of things. But, but what do they need us to come to class with a computer and sit there and take notes? Do they need us for podcasted lectures, or do they need us for human to human learning experiences? And so I really think for those, I think there’s going to be a lot more colleges and universities closing across America over the next few years. It already started before AI and I think the ones that survive are going to be those who really double down on the human to human learning experiences, the experiential learning and the building of durable human skills. And I’m not sure that online asynchronous does that.
Brent Warner 19:14
I think about that too, and I have several friends who are quite big advocates for the asynchronous online thing, with good reason. But I also worry about this side of, you know, this side of it too, which is like, Okay, how do we, how do we align these different ideas? So it’s, it’s long conversations with people who are like minded, yes
Tim Van Norman 19:33
and and as an online, asynchronous teacher, I agree. I think that it can work very well in exactly the right situation, the right teacher, the right student, etc, but you get outside of that right situation, and I think people struggle, and they’re not they’re not getting what they need out of it. Yeah.
Tricia Bertram Gallant 19:59
Yeah, to be fair, in person courses require all of that as well. Like, again, if it’s an in person course, but you’re just sitting there lecturing at students, and they’re not engaging with each other. That’s not value added either. Absolutely, it’s about the one. I want us to separate the modality of learning from the modality of assessment as we’re thinking through these thorny challenges, and so not putting pitting online learning against in class learning, but online assessments, unsupervised assessments against supervised assessments. And also the I just forgot my second point, but anyways, that one was most important. I had two points,
Tim Van Norman 20:39
so you might think of it later, that’s good. So as as we’re looking at this in the era of AI, can you talk about ethical judgment as a trade skill? Yeah. And also, how can we as community college leaders, help the students to see themselves as scholars in progress, rather than just customers buying a certificate. I feel like a lot of times those two get equated. The ethical judgment is a trade skill and customers buying a certificate.
Tricia Bertram Gallant 21:14
So ethical judgment, I think I’m going to call it. I’m starting to call it ethical intelligence is one of those durable human skills that we need to help people develop. We have shied away from it for too long, and we it is a skill. It is something again, that we have, that the machines don’t, and it’s going to be so it’s been imperative, and we’ve ignored it, and we might be seeing effects of that in our society right now. So I think we need to double down on that as a durable human skill and develop it just like some other durable human skills, interpersonal communication, yada yada yada. I don’t think we should treat our students as budding scholars. The majority of them are not going to be scholars. The majority of them are going to be folks, professionals and citizens trying to live out there in a very complicated world. I think all of our students need to know how to critically read scientific literature and research so they can make good decisions, so they can astutely critically think about the output of these AI tools. But I don’t necessarily think they need to be scholars in the sense of reading a lot of academic articles and writing in academic way. So I do think we need to remind rethink what it is we’re trying to produce. If you want to say, out of our institutions, and I’m not sure scholars is, is the majority of the population interesting?
Brent Warner 22:39
So I do kind of, I see what you’re saying. So, like, there’s lots of people that you know have high ethical standards, and they’re not necessarily, you know, they’re out there. I don’t know. I mean, what’s the, what’s the stereotypical opposite image, right? Like hunting out there by yourselves in the woods or something like that, right? Where it’s, like, I could see that, right? I can see where you’re saying, hey, well, we’re still teaching you skill sets that you can use in any part of life, and not just, you know, having to be reading deep books or, you know, things that are kind of connected with the ivory tower.
Tricia Bertram Gallant 23:13
Yeah. And I don’t think I’m sorry to all the philosophers listening, but I don’t think we need to read Aristotle and and Kant and all these great thinkers to learn how to just think more ethically in today’s in today’s world. So, for example, we’ll get, we get more students these days saying, like, I’m fully against using AI because the environmental damage that it’s causing, perfect opportunity to critically think and ethically think. Does there, is there an environmental cost using AI absolutely is there also one to watching streaming Netflix for hours or streaming Tiktok videos for hours, absolutely right? And so helping and eating meat, driving your car, whatever. So, you know, these are the kinds of exercises, mental exercises that are meaningful to students so that might raise their intrinsic motivation to actually do the work, because they’re interested in the topics, rather than having to read a book and just talk about what Aristotle thought, but applying what Aristotle thought to these contemporary situations is much more meaningful, much more engaged. Might raise intrinsic motivation, therefore might reduce cheating, and encourage students to really apply this stuff to their own lives, which is really what we need them to do, because, again, our job is to help produce the next generation of citizens and professionals. There’s so much
Brent Warner 24:34
There’s so much to talk about here. We’re going to have one more question. I talk a lot, but that’s what we wanted, and there are more reasons to talk with you again later. But the last one I want to get to, because, especially at the beginning of this semester and the end of last semester, I went around and talked to different teachers, especially adjunct faculty, and I just saw that glazed-over look in their eyes. We’ve got teachers driving around the freeway, freeway flyers going to two, three, four different schools, and they don’t have the bandwidth for these intensive assignment redesigns. How do I deal with AI, right? What am I going to do? They’re just tired, right? And a lot of them seem to believe they’re seeing AI in every assignment. So what are some strategies that a busy adjunct professor can implement without, you know, burning themselves out?
Tricia Bertram Gallant 25:27
Yeah, so, one, I hear you. All faculty right now are being asked to rebuild the plane while they’re flying it, and it is simply unfair, and it is most unfair to adjunct faculty. I have been one. I took three months to prep my team-based-learning, flipped classroom, and I started getting paid on the first day of class, right? So, three months of unpaid labor. I get it. And from the institutional point of view, from our government’s point of view, we’ve got to stop doing that at publicly funded schools; we’ve got to stop treating people like this and relying on so much unpaid labor. Okay, so that’s my big point. Now, you’re in that situation. You can’t control when you get paid and all that kind of thing. So the first thing I would say is just look at your current assessments. If you haven’t played with them in Gen AI, you should, because it changes so rapidly that if you haven’t played with it since it came out, you’re not really fully aware of what it can do, especially with Perplexity’s Comet and other AI agents like that. So look at your assessments: if they’re unsupervised, students are going to use Gen AI tools on them, and therefore you have to decide, does that undermine their learning and/or my ability to validate their learning? If it does, then you’re going to have to either change the assessment or look at securing it. Securing it isn’t really possible if you’re in an online, asynchronous class, but let’s put that in a bracket for a second. Securing it, if you’re an in-person or even a synchronous online professor, would be the easiest bridge option, rather than redesigning the whole assessment, right? And you don’t have to do that for every assessment in the class. Look at, you know, two or three supervised assessments, and the rest can all be formative assessments to facilitate student learning. Have them disclose their AI use, have them think about their AI use. We talk to our students about “if, how, when, and why”: if I should use it, when I used it, how I used it, and why I used it. And have them reflect on that, so that you’re learning from their experience and they’re learning from their experience. I think that’s the easiest thing to do. And even backing up, in order to determine that, ask your students. Listen to your students. Ask them: are you using these tools? Why are you using them? Hey, here’s the syllabus for our class. Is there any point in these assessments where you think you could use it ethically? You might learn something from them and say, oh yeah, I’m actually fine with that, and implement it as an allowed thing. So you really have to figure it out; you might have to secure some assessments and move away from fully unsupervised assessments. And if you can’t do that and you suspect AI is being used, then you might have to change your rubric. So if Gen AI can produce a C paper, then that’s now an F paper. And that’s kind of unfortunate, because maybe that C paper did actually come from a human, but Cs get degrees, and if Gen AI is just being used to pass through the course, we’re still certifying that student, right? We’re still not validating that student’s learning.
So that might be another thing that could be done more easily. And you can use tools like ChatGPT to do these things: to rethink your assignments, to rethink your rubric, to look at the vulnerabilities of your assignment. You can work with the tools to do that.
Brent Warner 29:03
I love it. Okay, so there’s a lot to get started with here. So Tricia, as we wrap up: on our campus, we are going to be using your book for a book club this semester, so we’re going to be working through those conversations, and then we’re working with you to get you onto campus at the end of the semester to come and talk with everybody. Tim’s got his copy of the book ready, and our bulk order came in, so anyone who’s listening at IVC can come and get the book; we already got it for you. So we’re happy to have you on the show first, and then, as we move forward, to keep talking about these areas of the conversation, like how can we deal with this? Because it’s not just a once-and-done conversation, it’s going to be a
Tricia Bertram Gallant 29:46
long-term thing, yeah, and it’s so context-dependent, that’s right,
Brent Warner 29:50
that’s right. And so each person in each department is going to have to start having their own conversations about how it works for them. So we’re really excited to unfold that part of the conversation on our campus, too. If anybody here is listening, please join us; we would love to have you in there together with us. And Tricia, how can people find you if they’re looking for you, if they’re not on our campus and can’t get a book from us?
Tricia Bertram Gallant 30:15
Super easy: theoppositeofcheating.com. And we have our own podcast, so if you’d like to hear from people who are thinking about this, talking about this, implementing ideas, you can check out that podcast as well, and you can get our contact information from there, too.
Brent Warner 30:30
Awesome. Thank you so much.
Tim Van Norman 30:36
Thank you for listening today. For more information about this show, please visit our website at thehigheredtechpodcast.com.
Brent Warner 30:44
As always, we do want your feedback, so please go to thehigheredtechpodcast.com and let us know your thoughts.
Tim Van Norman 30:51
For everyone at IVC that’s listening: if you need help with technology questions, please contact IVC technical support. If you have questions about technology in your classroom, please stop by Library 213 or contact me, Tim Van Norman, at tvannorman@ivc.edu.
Brent Warner 31:06
and if you want to reach out to me about the show, you can find me on LinkedIn at Brent G Warner.
Tim Van Norman 31:11
I’m Tim Van Norman
Brent Warner 31:13
and I’m Brent Warner, and we hope this episode has helped you on the road from possibility to actuality. Take care everybody.
Tim and Brent sit down with Dr. Tricia Bertram Gallant to talk about ideas around Academic Integrity as it goes through a new round of concerns and frustrations in the age of AI. Listen in as Dr. Bertram Gallant shares philosophical and practical approaches to dealing with the problems we’re all facing.
For those at IVC, this episode serves as a launching point for our AI Task Force Spring 2026 book club, as we will read The Opposite of Cheating, and Dr. Bertram Gallant will join us on campus at the end of the semester.
Resources
- The Opposite of Cheating Website
- Get the book: The Opposite of Cheating
- Dr. Bertram Gallant on LinkedIn
