
AI and Academic Integrity

By Sarah Achenbach
Nora Kizer Bell Provost Laura McLary as envisioned by DALL-E.

By identifying patterns that point to needed interventions, artificial intelligence (AI) helps humans solve complex problems like improving cancer screenings, identifying diseases, or saving the world’s bee population.

It’s also making itself felt on college campuses across the U.S., where administrators and faculty are wrestling with how to balance AI’s promise with the need to maintain academic integrity. Until recently, AI was still firmly planted in the “someday” realm for most campuses. The November 2022 launch of ChatGPT (short for Chat Generative Pre-trained Transformer) changed all that.

Launched by OpenAI, a private research lab focused on developing safe AI to benefit humanity, ChatGPT is a large language model. All users need to do is go to chat.openai.com or Microsoft Copilot (the AI chatbot formerly known as Bing Chat), register with an email address, and type questions, called prompts, in the message box.

Users can ask for travel recommendations or type in the ingredients in their fridge for menu suggestions. They can also type in “What are the allegorical examples in As You Like It and how do they relate to the play’s main themes?” Or “write a 1,000-word essay about the causes of the Tet Offensive.”

In the time it takes to read these last four paragraphs, ChatGPT will have drawn on patterns learned from vast amounts of publicly available text and, using complex mathematics to connect different digital breadcrumbs, spit back an answer to the user’s prompt. (It doesn’t search the internet on demand; it generates responses from what it absorbed during training.)

If that sounds like ChatGPT will research and write a paper, it can. Users can even request a desired style, length, format, and details. Imagine the temptation during finals week when a student is staring down three research papers, none of which are started.

This new technology has challenged colleges and universities to figure out how to guide students using AI to enhance their own work, not replace it. Hollins’ current academic integrity policy states that anything a student generates using AI that is not cited as such is a form of plagiarism. So far, there have been no Honor Court cases related to AI.

Image created with DALL-E.

But there’s no reliable way for faculty to check whether a student used ChatGPT, no trustworthy database to run an assignment or take-home essay through to see if it was AI-generated. And technically, ChatGPT-created work is not original. AI merely recombines what has already been written and is publicly available on a topic.

“Our faculty is actively grappling with [AI],” says Nora Kizer Bell Provost Laura McLary.  “Right now, they decide individually to what extent students can or can’t use AI. Some faculty are embracing it in different ways, while others are choosing to restrict or ban its use altogether in their classes.”

To help faculty better understand AI, McLary has provided resources curated by Sara Sprague, digital pedagogy and scholarship librarian at Hollins’ Wyndham Robertson Library. This past fall, McLary tapped Vladimir Bratic, associate professor and chair of communication studies, and Giancarlo Schrementi, assistant professor of computer science, to lead several voluntary professional development seminars on AI, which continue this winter and spring. She’s also created a new faculty and administrator task force to explore deeper professional development and student educational opportunities, such as a possible course on the ethical implications of AI and the inclusion of the topic in general education courses.

“How can we get students to think alongside technology? We have a responsibility to teach students how to embrace new technology.”

Vladimir Bratic, associate professor and chair of communication studies

“[AI] is ubiquitous, and so we want to encourage faculty to have conversations with their students and some engagement with ChatGPT,” adds McLary.

She knows students are grappling with the ethical ambiguity of ChatGPT and are looking to professors for guidance. “We know when they get out into the work world, AI is everywhere. We need to provide students with guidance to navigate that world.”

McLary, Bratic, and Schrementi agree that for college students today and the coming generations, it’s not a question of whether AI will be a part of their daily work life. It will be.

AI is being used across all industries – business, medicine, art. (I used it to transcribe every phone interview conducted for this article, which saved me hours of tedious work.) We can argue boon or bane, but it’s not going anywhere. ChatGPT – one of many generative AI tools today – has some 100 million weekly users, and its website draws nearly 1.5 billion visits a month. (The basic service is free; OpenAI now offers a subscription version for $20/month.)

“Four years from now, there will be zero bosses who will be saying ‘Can you do this thing for me, but you cannot use ChatGPT,’” says Bratic, who explains that’s the equivalent today of not allowing someone to use Google for a job. He’s been studying AI for the past decade and has his eye, and his curriculum, trained on the current first-year Hollins student who will be two years into a new job in 2030. “How can we get students to think alongside technology?” Bratic asks. “We have a responsibility to teach students how to embrace new technology.”

McLary concurs: “How do we give them the tools that they need to engage with AI critically? It isn’t simply asking ChatGPT to write a paper for you. It’s about teaching students to create better prompts to help edit or find more precise language for a paper they’ve written, even to coach and tutor students to deepen their thinking.”

Hollins’ small class size and focus on student-faculty relationships are advantages to this approach. “Our faculty have their eyes on every single student,” McLary says. “They know them as individual people. I think it would be incredibly challenging if we were in an institution with really large lectures. How do you monitor that? In some ways, [Hollins] is exactly the right place, the right environment for having really deep and engaged conversations.”

Some colleges and universities have banned the use of AI, while others, like Hollins, are beginning conversations about how to incorporate it appropriately into deeper student learning. Everybody’s playing catch-up.

DALL-E’s interpretation of female college students focused on taking a test.

“When ChatGPT came out, everybody was unprepared for it,” says Bratic. “Academia is slow to turn when it comes to embracing new technology. Thinking defensively is the most reductive way to think about new technology.”

Last July, he wrote in Faculty Focus, an online teaching resource, that integrating AI “… into traditional human knowledge acquisition is essential to the future of universities.” In the article, he argues that “the first step might be to accept the inevitability of technological advancement and to embrace collaboration with technology and robots (cobotics) [so] as to combine the strengths of both humans and technologies.”

In the years leading up to ChatGPT’s late 2022 release, Bratic heralded the coming reality of AI. In the AI awareness seminars, he works to guide colleagues who’ve never used the technology (the majority of Hollins faculty, he notes) to understand it. Understanding, Bratic explains, is the first step in developing Hollins’ systematic response to AI.

That response may indeed require a shift in pedagogy, or teaching methodology. “If my class is set up in such a way that a final exam or major midterm paper can be answered by a simple prompt to ChatGPT, the problem is not with the technology; the problem is pedagogy,” says Bratic, whose exams are open book, open notes, and open internet. “I encourage my students to lean on technology to help them solve a problem, and to use their brains for analysis, synthesis, interpretation, and contextualization. These are the skills I want them to have.”

He encourages his students to check their answers with ChatGPT and compare the results to examine the efficiency and inefficiencies in their own thinking. “If I assign an outline, I ask them to feed their outline into ChatGPT and ask the same questions [the student] would ask of me, then come to me to compare and discuss the answers,” Bratic says.

McLary sees the coming paradigm shift in the classroom. “Traditionally, higher education’s curricula are focused on content delivery, the lecture mode where students summarize [information] back to the professor in a paper,” she explains. “How do we then reshape our classrooms?”

Project-based learning and other newer teaching approaches are finding their way into classrooms and labs across the country. McLary cites flipped classrooms, where students do readings and work prior to class then spend class time engaged in hands-on projects and collaborative problem-solving with peers. AI, she says, can be a valued partner in these newer approaches. “This gives us the chance to shift the balance toward more dispositional learning, for example, solving complex problems or working collaboratively, rather than solely on content,” adds McLary of the need to incorporate multiple modalities beyond lectures. “This way, we can put the emphasis on student engagement.”

Linh Nguyen ’24, a mathematics major and data science minor, is doing just that – embracing AI as a tool. For her senior research capstone project, she’s created a bot (a conversational AI program) that responds to natural language – plain English – to query private, not public, databases.

“To query a database, you have to learn programming languages like Structured Query Language (SQL),” she explains. “We’re building a tool to allow someone to query a database without knowing SQL.”

Under the guidance of Schrementi and using ChatGPT, Nguyen has created her bot to understand prompts such as “Give me the number of math majors who also minor in the humanities” and then write the needed SQL query and process the results in a data table for the user.
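
For readers who want to picture the mechanics, here is a minimal Python sketch of the general natural-language-to-SQL pattern – an illustration under assumptions, not Nguyen’s actual code. The table, column, and file names are hypothetical, and only the database’s schema, never its data, is sent to the model:

    # A sketch of the natural-language-to-SQL pattern (not Nguyen's code).
    # Table, column, and file names are hypothetical.
    import sqlite3
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SCHEMA = """CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT,
                                       major TEXT, minor TEXT);"""

    def question_to_sql(question: str) -> str:
        """Ask the model to translate a plain-English question into SQL."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Translate the user's question into one SQLite "
                            f"SELECT statement for this schema:\n{SCHEMA}\n"
                            "Reply with SQL only, no explanation or fences."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content.strip()

    def answer(db_path: str, question: str):
        """Run the generated SQL locally; no row data is sent to OpenAI."""
        sql = question_to_sql(question)
        with sqlite3.connect(db_path) as conn:
            return conn.execute(sql).fetchall()

    print(answer("hollins.db", "Give me the number of math majors "
                               "who also minor in the humanities"))

The key design choice, which anticipates the privacy concern Nguyen raises next, is that the model writes the query from the schema alone while the query itself runs locally.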

The complexity of her project lies in the fact that OpenAI does not have access to Hollins’ or any other private database. “The problem we must overcome is how to have AI interact with our database and maintain data integrity and privacy,” says Nguyen, who plans to work in AI following graduation. “How do we get the results we want without exposing our database to the OpenAI model?”

Nguyen uses blockchain technology – a shared, immutable digital ledger that records and tracks assets safely and securely – to run the SQL query through the private database and to process the results. To keep OpenAI from gaining access to Hollins’ private data, she cuts the loop off before the bot can hand the query results back to ChatGPT.

DALL-E’s interpretation of a diverse group of female college students joyfully flying through the air.

The impact of her project means more efficient, productive work. “This makes it more user-friendly for faculty and staff across campus who work with Hollins’ data,” Nguyen says. “Most people don’t know how to query a database or know how to code.”

She learned to code in Schrementi’s classes. He now allows students to use AI to provide code, standard operating procedure in the computer science field. “There are not a whole lot of variants with code,” he says. “It’s not like you’re writing a paragraph where there’s a variety of word choices. My focus is having them be able to translate their thoughts into code.”

For Nguyen, this process turns into a conversation. “I ask AI how I can use a syntax or algorithms, and sometimes it provides stuff from different libraries and different frameworks. When the code doesn’t work, I let OpenAI know that it’s not working, so it’s learning from me as well.”
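
That conversational debugging amounts to a simple loop: ask for code, run it, and, if it fails, paste the error message back and ask for a fix. A hedged Python sketch of that workflow follows – the model name and prompts are illustrative, and a real version would also strip any markdown fences from the model’s reply before running it:

    # A sketch of the feed-the-error-back workflow Nguyen describes
    # (an illustration, not her code; model name and prompts are assumed).
    import subprocess
    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user",
                 "content": "Write a Python script that prints the first "
                            "ten Fibonacci numbers. Reply with code only."}]

    for attempt in range(3):  # a few conversational rounds, as Nguyen does
        reply = client.chat.completions.create(model="gpt-4o-mini",
                                               messages=messages)
        code = reply.choices[0].message.content
        result = subprocess.run(["python", "-c", code],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(result.stdout)  # the generated code ran cleanly
            break
        # It failed: report the error back, just as Nguyen tells OpenAI
        # "it's not working," and ask for a corrected version.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user",
                         "content": "That code raised an error:\n"
                                    f"{result.stderr}\nPlease fix it."})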

DALL-E created squirrel graduate.

She’s seen friends use AI to expand their ideas for essays, literature reviews, and research papers, but admits the results are less than perfect. “I feel it’s really noticeable if you copy and paste the [ChatGPT] paragraphs into your paper because sometimes it uses really weird vocabulary,” she says. “I never use it for my math or statistics courses because there is zero chance that it would give me back anything. You can ask it, ‘what is one plus one,’ but our math is really advanced.”

“Soulless” is the word typically used to describe AI-generated writing, a critique with which Bratic agrees. For now. “Writing good fiction, music, poetry, and art are purely human characteristics, but these are all cognitive things that are going to be definitely outmatched by technology,” he explains. Bratic is teaching a spring survey-type course, Communication and Technology, exploring how technologies change societies from cave paintings to AI.

He cites Wikipedia: “When it first came out, it was widely derided as bad. Now Encyclopedia Britannica is no longer published. Judge AI knowing that these things change and get better. I can guarantee that in ten years, it’s going to be much better.”

Schrementi acknowledges that there’s faculty concern about students’ use of ChatGPT for writing. He believes that developing a university-wide policy will be challenging. “The software out there for determining whether someone has used ChatGPT is just not good enough to use in any sort of Honor Court. There are too many false positives. It’s going to take a year or two before we really start noticing the kind of patterns in students using it.”

Mary Clare Abbott ’25, one of the three chairs for the Honor, Conduct, and Appeal Board (HCA Board), thinks about AI and academic integrity a lot. She’s not aware of Hollins students using AI to write papers or cheat on tests, but she’s heard indirect reports of faculty concerns.

“There are no good guidelines yet on how to use AI in the academic world,” says Abbott, who has begun meeting regularly with Michael Gettings, associate vice president for student success, about the intersection of AI use and the university’s longstanding and well-respected Honor Code.

Squirrel infestation of the Front Quad created with Adobe Photoshop.

“When the word ‘plagiarism’ was introduced to me in middle school, it was defined as taking someone else’s work,” she says. “AI is not technically someone else’s exact words. If a student comes through the HCA Board for [misusing AI], I wouldn’t really have a good answer for them as to how they could have done this differently, the way that I do for other cases of plagiarism. Higher education is going to have to figure out a way to prevent AI from being used in dishonest ways, but I also think that completely taking it out of the picture and pretending it doesn’t exist would be a disservice to students.”

“I hope to see [higher education] find ways to incorporate this new technology, not as a substitute for learning, but as a tool to foster more learning.”

Mary Clare Abbott ’25

This past December, Abbott planned and led Honor Awareness Week, a program that coincides with each semester’s exams. This year, she included sessions on using AI as a tool to benefit education, encouraging students to follow their professors’ guidelines and providing resources to keep students from using it in dishonest ways. She’s planning more campuswide discussions about AI use and having HCA Board members talk at a faculty meeting about how students are using the new technology.

“I think [AI] is actually going to be very good, but I think it’s going to take us a little while to figure out how to use it,” she says. “Hollins is responsible for preparing its students for successful careers and life beyond graduation. I hope to see [higher education] find ways to incorporate this new technology, not as a substitute for learning, but as a tool to foster more learning.”

Schrementi agrees: “There’s going to be a period of adaptation. We’re just at the beginning of it, faculty and students.”

McLary shares their pragmatic optimism. “We will get to the point where we use it sensibly and where AI is seen for its value more than as a threat,” she says. “If we can help students create better AI prompts, then it could be really powerful.”

“While I think faculty integration of AI on campuses and in the classroom will be a difficult shift, and ultimately change is never easy, I do think current and future students will be better prepared for the workforce with the experience of using technology that is not going away,” Abbott reflects, her eye on the world that awaits her.

Sarah Achenbach ’88 is a freelance writer living in Baltimore.