Listen: President Robinson on LCC in the new age of generative AI

LCC President Dr. Robinson and Lookout Staff Reporter Carson Lemon smile for the camera following their interview. Photo by Sarah Hamilton.
This article is part of The Lookout's LCC x AI series, a multi-part series on conversations surrounding AI and its impacts at Lansing Community College.
By Carson Lemon
Staff Reporter
Special thanks to the team at LCC Connect and especially Daedalian Lowry for their assistance in producing this article.
As part of The Lookout’s capsule of articles studying the impact of generative AI at LCC, LCC President Dr. Steve Robinson sat down with LCC student and Lookout Staff Reporter Carson Lemon. In the interview, Robinson explores the ethical dilemmas presented by generative AI, recounts his experience teaching composition at “ground zero of AI,” and more. Hear the president’s experiences, opinions, and interactions with faculty, staff, and others in the interview below.
Transcript
[0:00]
Lemon: Thank you again for sitting down with me. I very much appreciate you taking time out of your day to do this.
Robinson: Oh, I've been looking forward to it, Carson. Thanks for asking.
Lemon: Yeah, of course! I guess we'll jump right in. My first question for you today is, what was your first introduction to generative AI and what was your first impression?
Robinson: So, this is an interesting question, and I have a really specific answer. It's getting to be more than three years ago. At all the conferences I went to as a college president from 2021 to 2022, up to the beginning of 2023, gen AI was a big topic, and most of what I was hearing was sort of the same thing repeating itself. There was a lot of hype.
So, in April of 2023, I decided, well, I ought to get out and take a look at this technology. This is going to be more than three years ago. And I had a long writing project. I thought, you know what? This might be a good opportunity for me to explore whether an LLM like ChatGPT could help me with a project. I ended up having this absolutely crazy interaction that I wrote an article about. I brought a copy for you.
Lemon: Oh, thank you so much!
Robinson: And it's published, it's published on my LinkedIn page. It's called My Incredible Interaction with ChatGPT. And I meant the word “incredible” very literally. It meant “not believable.” You know, enough time has gone by that I can do a spoiler alert on the article. The chatbot claimed to do all these things that not only couldn't it do, but—and you can't really say that a machine lied to you—it was putting words on the screen claiming that things were happening that absolutely were not happening. And it was a funny learning experience for me.
I gave a talk about it at a conference, and I actually was on a panel with some of our LCC faculty. The ending of the article, Carson, is kind of funny because the article was about half finished. When I sat down with folks from our writing center and our writing program here at LCC, the then-director of the writing program came up with an idea. I said, "I don't even know how I'm going to finish this article. I don't know what my conclusion is. I don't know what I learned.” And he said, “Well, why don't you ask the chatbot to finish the article?” I said, “That's brilliant!”
So, I went and I did. And it wrote a pretty decent conclusion, but I disagreed with everything that the chatbot said. So, then I wrote another conclusion. This essay has two conclusions. One was written by a machine. The other was written by me, where I really nitpick. And the upshot is, it's not just about the AI hallucinations that are in there, but how humans have to fact-check and stay on top of things. So that's my first interaction. That's a long answer, but that was a very specific first interaction. It was a failure. I basically had about 60,000 words of transcripts with another college president, and I wanted help putting it into themes so that I could write an article about it. And it just fell flat on its face trying to do that.
[3:20]
Lemon: Okay, I appreciate it, not too wordy at all. And then I guess sort of a follow-up to that question—Obviously, things are always changing with technology, especially with the new developments that come with generative AI. So, I'm wondering, how do you ensure that you're always hearing what people at the college are saying?
Robinson: Yeah, well, there are a few ways. One is, this has been a pretty consistent topic at our Academic Senate. So, our Academic Senate has faculty and administrators. I'm not an academic senator, but when it fits into my schedule, I always like to tune in and see what the Senate is talking about. And at semi-regular intervals, faculty and administration have been having pretty good dialogue about artificial intelligence at the Senate. So that's one place.
We talk about it at our executive leadership team. And like I told you, just about every conference I go to—and I go to tons of conferences, that's a big part of my job, all over the country—there are sessions and keynote speeches. At the beginning, a lot of the keynote speeches about AI were not really helpful, said a lot of the same things, a lot of hyperbolic claims, nothing really actionable. I actually brought a book that I'll show you, and I think it's in our library, so you could get it. It's called Co-Intelligence by a guy named Ethan Mollick. One of the conferences I attend is the Achieving the Dream Conference. He was the keynote speaker. He's a business faculty member at the Wharton School. And it's probably the best talk about AI that I've heard about how, just in his viewpoint—and I know we'll get more into this—his viewpoint is just about everybody ought to be trying to leverage artificial intelligence tools to see how they can propel their own work. There are a bunch of ideas that came out of this, but the one idea that Mollick returns to is disclosing that you're using AI. And I know you, I think you have a couple of questions about that, but otherwise people are just hiding it. You probably see all kinds of online content that you look at and say, “That looks like somebody used an AI tool to put it together, but it didn't say that anywhere.” His point is, if you prohibit people from using AI, they will become what he calls “secret cyborgs.” They're just going to do it anyway and pass it off as their own work.
[5:40]
Lemon: Yeah, for sure. So, I guess that kind of leads into my next question. You actually shared with the Academic Senate too; you taught a composition class this fall.
Robinson: Last fall, yeah.
Lemon: Last fall. And I'm wondering, in what ways did you see students use generative AI tools to help with their classwork? And did you find that some ways were maybe more beneficial to learning than others?
Robinson: Absolutely, yeah. So first of all, I have to make a big disclosure here. I taught one section of English 121 for one semester. So this is a very micro case study of 21 students. You know, the first 15 years of my career, that's what I did all day, every day. I'm a veteran English teacher. All three of my degrees, including my PhD, are in English. But it's been a long time since I've been in the classroom. But I've lived through big technology changes. I was an English professor when students didn't have access to the internet. And during my career, then comes the World Wide Web and online research. And I was one of the first faculty members in the state to completely teach online in the early 90s. So I've been through technological changes before.
The short answer to your question is, I was surprised at how few students actually used some of the LLM tools. I went into teaching 121 again thinking, “Oh my gosh, I'm going to ground zero of AI,” right? That's what everybody's talking about. It writes your term papers. Nobody's going to need to have writing skills anymore.
"I was surprised at how few students actually used some of the LLM tools."
And this is going to sound really judgmental, but I had great students. But there wasn't anybody who touched an LLM tool where I looked at their essay and said, “Oh yeah, that's perfect. There's nothing you can do. You don't need this class. This tool has kept you from needing the class.” Everybody still needed to go through that process of becoming a writer, even the students who used it.
But one of my big takeaways was, there were—I'd say about half of the students just said they didn't want to use it at all, which is kind of cool. The students who did use it—we could have a longer conversation about this—but a couple students, in working with them in individual conferences and looking at their drafts, it seemed like the more they relied on an LLM like Claude or ChatGPT, the more problems it created for them. Because, my assignments were in iterations: “Here's a first draft. I want you to put it through another revision. I want you to share it in a peer review group in the class.” And while a chatbot is great at taking something and approximating a final product, it can't really do what I ask students to do, which is to, you know, engage in invention and revision and rethinking things. So, it wasn't the panacea that I thought it was going to be. And it didn't create the kind of problems I thought it was going to.
Lemon: Yeah, so tying that into, you mentioned, you said roughly half of your students in that class weren't interested in using AI?
Robinson: They didn't seem to be, Carson, because here's how I worked it. And you might know this, or maybe you've written about it in other things that you've been writing. But faculty at LCC have basically three choices. You can completely prohibit the use of AI. There's a middle ground where students can use AI tools with permission. And then the other one is sort of like the Wild West: you can do whatever you want.
I chose that “Wild West” with this little provision. Just like I asked students to cite if they used information from a book or a magazine or a video, I said, if you use these tools, you have to cite them. So every paper, no matter what it was, had a generative AI citation page. And you had to say which tool you used and how you used it. When we got to the research paper, not only did you have to disclose that, by then I asked them, I said, I need to see the chat transcript, right? So they had to turn that in. And that's how I knew that, you know, because right at the end it would say generative AI citation, and they just put, “I did not use AI tools for this assignment.” So about half of them didn't.
[9:57]
Lemon: Okay. And I'd like to get back to that citation in a little bit. But first, there was an Instagram post that came out maybe a month ago for the new LCC Stars mascot. And that garnered a bit of backlash from the community. So I'm wondering what was your initial reaction to that backlash? And was there anything you took away from that?
Robinson: I took a bunch away from it. I learned a lot. So just for your listeners- and are people going to listen to this, or is it a transcript, or are you going to put out both?
Lemon: Both.
Robinson: Okay, great. So, if you're listening, or for Carson's article, here's what happened. In the development of the LCC Athletics mascot, which now has the name Lance—students named the mascot Lance—in the early kind of invention ideation stage, I played around with some visualization tools in AI the same way you would say, “What would brown drapes look like in this room?” “What if I took the couch and moved it over here?” Because I'm not much of an artist. I actually brought my sketches for you, Carson, like this. I'm not a great artist.
Lemon: Oh, I love that though. That's awesome.
Robinson: So these are pretty good, but what I brought for you is, you know, just to see, because I'm not an engineer, nor am I a seamstress. I took costume design in college because I was a theater major. But this is what I used an LLM for. I said, “Take my drawing and show me what it looked like if it was a costume.” And then I changed the prompt and said, “Well, make it a little more aggressive,” or “athletic.” And then this one, I said, “Well, our school colors are this.” So I used it in that way. But by the time we got to rolling it out on social media, all of the images were done by human designers. Not me, I'm not a designer. But they were still talking about the fact that I'd use these tools.
So I said, let's do what I taught my students to do. We'll put a citation that says the early concept generation—I forget exactly what it was. But it was me who said put it there. Cite that. It says, in the early phases of this, we used some AI tools. And there was a very strong backlash, even though I got on Instagram and said, “Look, you're not looking at AI images. These are real human constructions.” But it seems like just the mention of AI was like a poison pill for some people. And we got- It was some pretty heavy backlash from some people.
"By the time we got to rolling it out on social media, all of the images were done by human designers."
[12:26]
Lemon: Yeah. Why do you think that was? Why do you think it's such a volatile topic for many people?
Robinson: Well, it's interesting. And I don't know if we have time to talk also about data centers, you know, or the environmental impact of artificial intelligence and data centers. But I think that there are a couple of words that are really divisive, or phrases that kind of put people into two camps: “data center” and “artificial intelligence.” And I think that people just had a very strong reaction to that. They didn't even read the whole sentence, I don't think. The sentence made it clear, you're not looking at an AI image. But just the fact that those two words were there kind of set some people off. And one thing I learned is that, I see all kinds of content that, based on my experience, I know are Claude- or ChatGPT-generated images, but it doesn't say, “This was created with AI” and people aren't, you know, pouring haterade all over those.
So, I think it was the disclosure, in one sense. And I understand some of the fear, right? Particularly creatives, musicians, designers, artists, have a well-founded fear that AI might replace their intellectual and artistic work. And also, there's a certain amount of theft, or the fact that the LLMs train on, basically, images out on the internet. So that it's some kind of appropriation. So I understood it. As you mentioned, I got on Instagram right away. I actually had a couple of students write to me, and I wrote them back and I think they appreciated hearing from me, where I said, “Here, this is what I was talking about. I was basically using these chatbots like you would a graphing calculator, you know, to do some math. The end product is still a human design.” Like I said in the post, the actual drawings and the actual costume, I mean, that's all made by humans. That's a real human in the suit. So.
Lemon: Yeah.
Robinson: At any rate, that's my perspective on it. I learned a lot.
[14:38]
Lemon: Yeah. So it sounds like you understand that some student concerns around the use of generative AI are valid. And you're not alone in this. Like, there's a lot of people in higher education that just want to explore how AI can be used within the field. And so I'm wondering, how do you reconcile those two, those two opinions?
Robinson: So it's like a lot of complicated things in life. I think there's a continuum rather than a binary, right? It's not like AI is good, AI is bad. There's- It's more complex than that. There are shades of gray and it all depends on context. So, you know, a couple of thoughts about that. As an institution of higher ed, we do have a role in protecting intellectual property and making sure that creative and artistic work is grounded in humanity and that there's kind of a personal and experiential aspect to everything that we do.
On the flip side, though, we also have the responsibility of preparing folks for what the work world is going to be like and what their job is like. And if we graduate folks who don't know anything about how to use these tools, we're probably doing them a disservice. So we have to do a little bit of both. And I think there's a lot of nuance there.
And as I listen to faculty conversations- Because, full disclosure, I'm not a faculty member anymore. I will teach a class here and there. I was full-time faculty for 15 years. But the thought leaders in areas like this, our faculty members, I think, are approaching this in a nuanced way, where you probably have some folks who are saying, “I'm never going to let students use these tools. These are bad. They're bad for humanity. They're bad for the environment.” And then on the other side, you probably have faculty who are just as ardent in saying, “Look, this is the future. This is how we're going to solve big problems.” I think at the college, we've got a pretty good approach to that.
[16:35]
"As an institution of higher ed, we do have a role in protecting intellectual property and making sure that creative and artistic work is grounded in humanity and that there's kind of a personal and experiential aspect to everything that we do. On the flip side, though, we also have the responsibility of preparing folks for what the work world is going to be like and what their job is like."
Lemon: Yeah, and that actually ties into another question I've got. When I was speaking to Provost Welch earlier this month, she mentioned that there is a big push right now from employers, that they want people who know how to use these generative AI tools. But I'm curious, what does that look like? I mean, what does being proficient in using generative AI, what does that look like? How do you teach that?
Robinson: That's a great question, and I think it's an open and developing story, right? You know, we have faculty members who are subject matter experts in deep learning, machine learning, and artificial intelligence. There are classes you can take here at LCC. And there are folks who are subject matter experts in other areas of the academy who know a lot about AI. I think it's an open question. I know we don't have a list of, like, AI competencies the way we do for, say, writing or mathematics. Those will develop.
I think back to the beginning of my career when, like I said, the internet wasn't really a thing when I started teaching. And then just a few short years into my teaching career, I was teaching online. I was the guy who- I still have my notes from the workshop I gave to my colleagues. I'm 20 something and I'm teaching the older faculty: what is the World Wide Web and how can you use it? I mean, it's just a new tool, right? And we're going to have to figure out- Back then in the early 90s, we were having the same kind of conversation about, like, are we going to teach people to do computer-based, internet-based research? Do you want to hire somebody who is fluent in technologies like e-mail and spreadsheets? Or are we saying, “No, we want to stick with slide rules and little memo envelopes”?
"The AI, I think, has the potential to be a really disruptive technology. But if we're being honest about it, so is the internet. The internet completely upended things."
I just think it's the next iteration of technology with a little bit of a twist that we might get into. The AI, I think, has the potential to be a really disruptive technology. But if we're being honest about it, so is the internet. The internet completely upended things. Think about Amazon, think about Netflix, right? We have Netflix now. We don't have Blockbuster movie stores, right? We have Amazon now. The Sears stores are empty. Stuff like that will happen with deep learning and artificial intelligence.
[18:52]
Lemon: Yeah. And so, then I'm curious, as president of LCC, you are in circles and maybe in conversation with other leaders of colleges around Michigan. And I'm wondering if you can clue us into what's being said by other administrators and leadership at colleges around Michigan in terms of generative AI use?
Robinson: Yeah, so, obviously I talk to my president counterparts at the other community colleges in Michigan and across the country. This is a huge topic at all colleges. Now, like with most things, LCC is kind of a leader. We're probably not the vanguard of community colleges on AI approaches and policies, but we're in that upper echelon of colleges that are thinking about it and teaching it and doing stuff with artificial intelligence.
And like I said, when I think back to the beginning of the internet, it was important for the president of LCC at the time to be thinking- The building we're in is named after Abel Sykes, who was the president of LCC from 1989 to 1999. Think about what happened in that time period. Now, when he landed here, none of the faculty had computers on their desks. Maybe the computer science faculty did. And by the time he retired from LCC, you couldn't run a college without computers and the internet. I do think that that's kind of what we're looking at.
And so, what I see colleges grappling with is, what does this mean for policy at the college? What does it mean in the classroom? And I'll end by saying this. It's one of my goals- I have to set three goals with our board of trustees every year. One of my goals this year in 2026 has to do with artificial intelligence, and specifically getting up to speed with how it impacts the college in three ways. One is the one we've been talking about: How does it impact the classroom? How does it impact teaching and learning and assignments and the things that students need to know when they finish a class? Second, what does it do to our own enterprise, our workforce, right? Are the folks in Financial Aid going to start using AI? Are the folks who do marketing going to use AI tools? And one of the scary things about that is, what does that do to our workforce? Does it mean that one person can do the job of 10 people? So far, we're not seeing that. But that's an impact. And then the third one is, how does it impact the world that we're preparing you and everybody else to go out and work in? That's why I'm sort of an opt-in kind of person: you know, if you're going into communications or PR, you're in some way going to have to figure out how to leverage bots and AI tools to get your work done, because you're competing with people who are.
[21:34]
Lemon: Yeah, for sure. And I guess you're talking a little bit about employment changing and, in five years, one person maybe being able to do the work of 10 people now.
Robinson: I think we don't know that yet, but yeah.
Lemon: So are there any other sort of ethical dilemmas that you have run into, you or other people on your team? And like, what is your response to that? How do you go about thinking about this?
Robinson: It's huge. So I mean, I think we need to approach all issues and particularly technology issues from an ethical standpoint. And one of the things we talk about with data centers and artificial intelligence is environmental impact. That's one. Okay? Then the other one is this idea of ownership, intellectual property. That's another. And then so much of this has to do with people's own job security, right? So, there are any number of ethical issues.
"I think we need to approach all issues and particularly technology issues from an ethical standpoint."
The one that I think about a lot kind of dominated our discussion about data centers here in Lansing with the Deep Green Project. There was a company from Manchester, England, that was working with the city to put a data center downtown in Lansing. It was going to use less water than a McDonald's restaurant. But there was a strong backlash in the community against data centers, against artificial intelligence, and the company ended up actually pulling out.
And I actually brought an article for you that I saw a couple of weeks ago, an opinion piece by a student author in another student newspaper, Princeton's Daily Princetonian. This is a great headline: “Guilty about your ChatGPT environmental impact? Eat one less burger instead.” I've been tracking this for a long time. There's a project called the Peanut Butter and Jelly Project. And I will get the number of gallons of water wrong. But if you eat a peanut butter and jelly sandwich instead of a hamburger, you're saving hundreds of gallons of water, right? So, this student journalist at Princeton, his name is Jack Thompson- I'm reading right from here. It says, “You'd have to prompt ChatGPT 3,600 times to release as much CO2 as running your clothes dryer for an hour. Or you'd have to prompt Google Gemini 1,500,000 times to consume as much water as you would if you ate a hamburger.”
So, everything's an ethical choice. I think just because AI's new, we're thinking, is it worth the environmental impact? But your Starbucks coffee is an environmental impact. Golf, for example. So the student ends here saying like, if you're going to be mad at people for using AI, you should also be mad at them for eating meat and playing golf, because those actually have a bigger environmental impact. That doesn't mean that the impact of data centers and AI is something we shouldn't think about. We absolutely should.
[24:20]
Lemon: Yeah, for sure. And I'm wondering too, you've been very outspoken about your support for exploring the uses of generative AI in higher education. Do you worry that, you know, as president of LCC, that might influence faculty or other administrators to maybe, you know, pipe down or maybe not voice their opinion as much if it disagrees with that of yours?
Robinson: Yes, yeah, but it's going to be a “Yes, but.” So first of all, that's something I worry about with everything I say. So here's what I learned. I mean, I was a faculty member for 15 years, always had an opinion, but everybody was always saying, “Well, that's that English professor's opinion.” But then all of a sudden, you have a dean's opinion and a vice president's opinion. Now you've got the president's opinion. So the short answer is yes. I think we have a culture here at LCC where it's okay to disagree. It's okay to have a different viewpoint. I say that in arguments or discussions, dialogues.
Look, it's okay if you have a different viewpoint than mine. But the short answer is sure. Taking a lean-in, you know, try-it-and-experiment stance with AI, I do think about, “Am I putting undue pressure on folks to adopt the same thing?” I will say this though, Carson. Colleges are places where there are a lot of safe places for people to have other ideas, right? My whole career—and I've been at three amazing community colleges—not only is it sometimes okay to disagree with the president, it's kind of cool to disagree with the president. And that's one of the great things about academic culture.
So yes, I think about that and worry about it, but I also know that we have a robust, you know, academic freedom–based culture where it- I've never heard anybody, not that they would say it around me, it's like, “Well, I think I should have this opinion because the president has this opinion.” I hope we don't have that kind of place.
"Colleges are places where there are a lot of safe places for people to have other ideas, right? My whole career—and I've been at three amazing community colleges—not only is it sometimes okay to disagree with the president, it's kind of cool to disagree with the president. And that's one of the great things about academic culture."
[26:17]
Lemon: And then I'm wondering too, I was doing a little bit of reading, I think from the Chronicle of Higher Education, and they did an issue on generative artificial intelligence. And they talked about the things that AI can maybe automate for us. I think the example was, like, writing a formal e-mail. That's something we're still maybe learning a new skill from, or maybe reinforcing an old skill. Do you worry about what's lost when you start to use AI to automate some of that? And maybe how would you counteract that?
Robinson: Sure. No, I do. And, you know, full disclosure, all of my degrees are in composition. I mean, like I have an undergraduate degree in English literature, my master's degree was in teaching writing at community colleges, I wrote my Ph.D. dissertation about how to teach writing. So teaching writing and the writing process are really important to me.
So sure, I'll give you an example. The first time somebody really showed me an LLM on a screen, they were all excited and said, “I don't have to write letters of recommendation anymore,” right? And this is- I'm not going to say which educational institution this person was in charge of. They're actually not in charge of it anymore. They've gone to another community, but this person, right on their screen in their office, said, “Look, this student wants a letter of recommendation. This was going to take me half an hour. Watch, this will do it.” And I was horrified. I was.
I'll tell you this, I write letters of recommendation for students all the time. I like the process of doing that. I'm pretty good at it. It doesn't take me a long time because I've been doing it a long time. And there's something about the process of doing it that's important. I think, built into your question is, there might be some more transactional types of composition where it would be okay.
And I actually have one I'll share with you. I made a really dumb mistake when I was giving a talk in another state. And this is not for LCC, it was outside. I booked two hotel rooms on the same night. So I'm kind of in a pickle, right? And the one hotel was not going to work with me to cancel it. And I was pretty angry about it. And it would have taken me a long time to get in a head space of writing a complaint letter. So I actually used the chatbot to say, look, here's the situation—literally typing this—here's the situation, typed out exactly what happened, and I'm literally too angry to write a nice complaint letter. Write a first draft of a complaint letter that is designed to get me the result that I want, which is them to reverse the charges for this. And it did it in a matter of seconds on my phone while I'm sitting in the airport on the way home.
I don't think the hotel that I was mad at deserves my full attention and my human, you know, composition process. And so to me, that's a great example of automating something that doesn't require that kind of human touch. But Carson, if I were going to write you a letter of recommendation, I wouldn't even think of using a machine to do that, because I actually enjoy the process. Does that make sense? That was kind of a case study of those two things. I could probably come up with others, but that's a great example. I think we just have to discern what requires that humanity, what doesn't. And I think it's like transactional versus meaningful. That's just my take on it. And that might change over time.
Lemon: No, yeah. So I mean, obviously there are examples or hypotheticals, but so like, you know, maybe you lose out on sharpening your argumentation skills when you have AI do that for you, but you save time and, maybe save time for a real human connection?
Robinson: Exactly.
Lemon: Okay.
Robinson: Exactly. And in that case, like I said, it would have taken me a while to be emotionally intelligent enough to write a good letter. And it really helped me.
[30:21]
Lemon: Yeah. I feel like we- It's been talked about a lot. We're in like an age of loneliness.
Robinson: Yeah.
Lemon: So a lot of people are very disconnected. And so a lot of the suggested ways I've been seeing AI, generative AI to be used here is, “Oh, have it, you know, edit a paper. Have it brainstorm project ideas.” But I worry that might cause a lack of connection. I mean, if somebody goes to a chatbot for essay revisions, that's them not going to the Writing Center. Or if somebody is asking the chatbot for project ideas, you know, that's them talking to the computer instead of asking a friend. So, how do we navigate that? You know, is it worth it, that time saving, if we're losing out on real connection?
Robinson: Man, what a great question. And you've married two really big, important issues in society in this time period that we're in. This idea of isolation and loneliness, which by the way, is technology driven and predates LLMs. I mean, like paradoxically, being connected through the world wide web and the internet has brought people together, but also pushed them apart, right? So that's a super important thing, what you called loneliness. And then this automating or like interacting with a bot, right, could be a real dangerous connection.
This is not an area of expertise for me. Every time I've tried to leverage an LLM to give me sort of personal insights or even help with writing- Like in this book, Malik, in one of the chapters, talks about how he leveraged AI tools to help him write the book. He created different personas of editors. Like, he created a bot that was like the mean editor who took out the red pen and kicked his butt about what he was writing. And then the nice, supportive bot that said, “This is great. Do more of this.”
Every time I've tried something like that, it just hasn't felt right. But that's like- I come out of the English department. I come- I worked at a writing center forever. I haven't gotten it to work. But I read about, you know, people getting really friendly with AI bots or having AI bot therapists. And I just don't know where that's going. It's not an area of expertise. Every time I've tried it, it's not been as effective as some of the more transactional things that I've tried with it. I think it's a really insightful question, though. Could it accelerate some of this loneliness? I think it could.
I think you also- One of the things—because I pay a lot of attention and read a lot of articles by journalists about AI, listen to podcasts and TED talks about it—and prominent journalists who write on this topic all have the same story. They say, "People come to me—and the 'me' being the journalist—with stories of kind of AI psychosis, where they have talked themselves into thinking that they solved some unsolvable math problem, or that they've diagnosed a medical condition that doesn't exist." And that's pretty maladaptive.
"Just like any technology, I think, does it improve your life? Does it take away from your life? Does it take away from your social connections or does it help build your social connections?"
I think, just like any technology, I think, does it improve your life? Does it take away from your life? Does it take away from your social connections or does it help build your social connections? That's my thought.
[33:42]
Lemon: Yeah, and then circling back to that citation method we were talking about earlier. So, you know, there's been a lawsuit. The New York Times has filed a lawsuit against OpenAI, which is, I think, set to go to court either this year or the next, essentially alleging that the model would produce works that very closely mimic writers from the Times. So I'm wondering, you know, current practice at LCC is, if you're using generative AI to write some parts of your essay for you, you cite that. You say you used this. So, you know, maybe it's not plagiarism. Is it one step removed?
Robinson: Well, this is a fascinating question because, and it requires- I know enough to have the conversation, but I'm not an expert in it. The way I understand LLMs and how they work is they work on this technology called transformer technology. They're not actually going out and looking at the fiction of Henry James and the Virginia Woolf poem or something like that. Those transformers trained on that stuff. They culled general principles or algorithms from them in kind of a neural network. And then they craft unique novel responses based on the rules that they trained on. Now, somebody who's a computer scientist might say, you know, Robinson got that all wrong. But that's how I understand how LLMs work.
Now, I think it's still an open question whether the stuff that the transformers were trained on is being appropriated or inappropriately taken, used, stolen. As somebody who's a student of popular culture, I have kind of- I don't have an answer to this question, but my difficult question about that is: is that true of the Beatles, who took a lot from American blues and Motown? When Paul McCartney says, “I want my bass to sound like that guy who plays bass at Motown,” is that theft? Was it theft when Led Zeppelin used a riff from a blues guitarist that they heard? Is hip-hop theft because they sample 70s R&B? And I think there's a postmodern idea of, is authorship really even a thing? You know? Like, for example, in the Renaissance and earlier, a lot of people didn't sign paintings, or paintings were done by groups of people.
So I'm not saying authorship and ownership of intellectual property doesn't matter in our society. It does. You put your name on something, you own it. If it generates income, you ought to be able to capture it. But I'm fascinated by how the technology interacts with what we understand to be ownership or intellectual property. So, I don't have a great answer other than it's super complicated. And one of the things that, you know, most of the experimentation I do with AI has been on non-LCC stuff or things that, you know- Like I don't feed any, if I were writing- That's another reason I wouldn't use an LLM to write your letter of recommendation. I'd have to put information about you into the chatbot. I don't know where that's going.
"I'm fascinated by how the technology interacts with what we understand to be ownership or intellectual property."
Now, I've been assured that it doesn't stay there and gets- But I'm not going to take that assurance. And we're certainly not going to upload financial aid files to- You know, right? So I think it's complex. You've asked a great question. I don't know that we have a real super specific answer, except for it's a new frontier.
[37:28]
Lemon: Yeah, for sure. And I like all those examples you gave, you know, the musicians. It's all- Most of those have gone to court. And it's a very complicated and nuanced precedent that's set, which is ironic considering our jumping off point for that is a soon-to-be court case.
Robinson: Yes, exactly. Who knows how it's going to go? And I think what's going to be interesting is, in that courtroom, you're going to have computer scientists trying to explain how transformers work and what an LLM is. And at the end of the day, what's going to be funny, it's going to be uninformed people like me, a judge or somebody like that, making a decision based on what somebody taught them about how the whole thing works.
Lemon: Yeah. And then also, you touched on this a little bit ago about how, you know, we know that AI has been putting a strain on the environment.
Robinson: Yes.
[38:14]
Lemon: I was just reading a New York Times article about a family in Georgia. Their well ran dry after a data center was built near them. And I'm wondering that, you know, if LCC is promoting the use of these generative AI tools that are putting a strain on natural resources, maybe not as much as other impacts, does the college still have an obligation, you know, to fight these harmful impacts? Do they have an obligation to put a net good into the world?
Robinson: Yeah, absolutely. But I think it's more complicated than a yes, no kind of situation. I'm glad you ended the question with public good. You know, there are a lot of colleges and universities that have that phrase in their tagline or their mission statement. It's certainly true of LCC. We want to be a net positive for the community.
"We want to be a net positive for the community."
I tell people, since 1957, we've been in what I call the social mobility business. Whether you want to get a transfer and get a bachelor's degree or go into one of our occupational programs where you're going to earn more, most people come here to make their lives better and have a better future, right? Pretty much everybody. We certainly want to do that to the community. We don't want to drag the community down.
Now, I mentioned Deep Green before. This is a very local thing. This was going to be like three blocks from here. It's not going to happen now. That's a data center that was going to have a very small environmental impact; it was going to use, like I said, the amount of water that a fast food restaurant uses. And you wouldn't want to put up picket signs when there's a new fast food restaurant. In fact, people get really excited about that—some new chain coming to town, Shake Shack or Raising Cane's. Some people get excited about that. But those places use as much or more water than this data center was going to use. But I think the artificial intelligence data centers are in people's consciousness. There was a huge pushback. At Lansing City Council, there were hours of public comment saying, we don't want this here. We don't want it to happen.
From the college's perspective, that Deep Green project would have benefited the college. The way the plant was going to work is they were going to cool the CPUs of those computers down with the city steam process they have at the Board of Water and Light, and it was actually going to be a net negative in terms of the electricity impact. That's the other thing. You mentioned a well running dry. Communities have seen energy prices go up or energy availability go down. So it's complex, but I think the way I would say it is that it's not cut and dried what's an environmental negative and what's an environmental positive.
From my view, and there are a lot of people who had different views, that Deep Green project would have been a net positive for the city in a parking lot that's close to a factory that would have kind of put Lansing on the map as like, if you're going to do a data center, why don't you do it in this environmentally friendly way like the people from Manchester, England, do? But our community spoke, and they said, “We don't want that.” So I think it's complex, but this problem's not going away. Our need for data processing is going to increase, and it’s something we're going to have to deal with.
"It's complex, but this problem's not going away. Our need for data processing is going to increase, and it’s something we're going to have to deal with."
[41:33]
Lemon: Yeah, well, that brings me to the end of my questions, but I did just want to give the floor to you. Is there anything else you'd like to add? Anything you felt like you didn't get to speak on?
Robinson: So, you asked amazing questions. And so I think we had a comprehensive conversation about it. The only thing we didn't get to, which is part of that- I talked about my three goals: how AI impacts the classroom, how it impacts the enterprise we call LCC, and how it impacts the world that we're sending everybody out into. One thing we didn't talk about—and some of it sounds like science fiction in that last category—is this is one of the things that makes deep learning, machine learning, artificial intelligence different from the rollout of the internet: this does have the potential to be a really disruptive technology. And when people start talking about this, they start talking like a dystopian science fiction movie, right? You've probably seen this. What happens when we reach something called AGI—artificial general intelligence—where we build a computer-based bot that is smarter than you and me? It can do anything.
And it's the first time in our knowledge of the universe that one type of being creates a being that's better or more dominant than it. Right? This is the plot of The Terminator, right? This is the plot of HAL the computer in 2001. And there are dystopian views of this, what this does to the world. There's a gentleman who wrote a book about this called If Anyone Builds It, Everyone Dies, right? That's the dystopian.
"That's the dystopian. There's also a utopian view of the world."
There's also a utopian view of the world. Like, for example, some of the bigger problems in our society that we haven't been able to solve as humans, like climate change. The problem hasn't been the will to solve climate change. One of the things I think about is all the junk in space. You know, 50, 70 years of junk, garbage floating around in space, the big garbage gyre in the Pacific Ocean. These are things that I think machines that never take a break and can think exponentially faster and better than we do might be able to solve—some of these big problems that humans haven't been able to solve. Certain diseases like cancer. Some people are really focused on longevity. I think that that's sort of the utopian thing we haven't talked about. And it's one of the things that I think we're going to see in our lifetimes: this technology, unlike previous technology like the steam engine or the internet, has the potential to really, really change human existence in a way that I don't know if we're prepared for. So we didn't talk about that, and I'm not an expert in that. So I don't have much to say except for putting it out there that that's something we need to consider.
Lemon: Yeah, no, I like that. And I think it is very fitting with a lot of the news today: this could be really, really good or really, really bad.
Robinson: Correct. Yeah, exactly.
Lemon: Yeah. Well, I appreciate- Again, let me thank you just so much for taking the time out of your day to do this.
Robinson: Oh, I loved it. You're a good interviewer, and this was a fun conversation.
Lemon: Well, thank you. And I think you, I mean, you don't need me to tell you this, but you had some really impressive answers.
Robinson: Oh, thanks. I've been thinking about it. I hope I had impressive answers because it's something I'm focusing on.
Lemon: Yeah, no.
Robinson: Thanks a lot.