ChatGPT on campus: Assessing its effects on college writing — and teaching
from Yale News
Yale’s Alfred Guy discusses the potential dangers and opportunities of the AI technology and how educators can use it to improve student writing.
Since its public launch in November 2022, the platform ChatGPT has generated a tsunami of news analyses and online discussions about how it and similar artificial intelligence (AI) technologies might upend the world as we know it.
Given the app’s ability to quickly produce cogent summaries of knowledge and comparisons of different viewpoints — the kinds of tasks commonly assigned in introductory college courses — many university professors and scholars have wondered what its evolution might mean for college writing.
At Yale, instructors looking for advice about this and other AI-related topics have been coming to the Poorvu Center for Teaching and Learning, where Alfred Guy has taken a lead role in developing faculty guidance in response to ChatGPT. Guy has directed writing programs at major universities for 30 years, including two decades at Yale. At Yale College, he is assistant dean of academic affairs and the R.W.B. Lewis Director of Writing, overseeing courses across 40 academic departments and programs.
Guy, whose own research explores the link between writing and intellectual development for undergraduate students, is also director of undergraduate writing and tutoring at the Poorvu Center. In that role he manages more than 100 undergraduate, graduate, and professional tutors who consult with students by appointment or during drop-in hours. Last year, these tutors worked with more than one-third of the students in Yale College in nearly 10,000 tutoring sessions.
In an interview, Guy discusses the potential dangers and opportunities presented by ChatGPT and how educators can use the platform and other AI technologies to improve student writing.
It sounds like many educators have reached out to you about ChatGPT. What kinds of questions are they asking? And how do they expect it will affect their work?
Alfred Guy: It is very exciting to see a technological development that is, so far, generating talk about teaching and not just talk about preventing the use of this tool. When Wikipedia first came out, many college teachers banned it, and as it became easier for students to find sources on the internet, many teachers clamored for better plagiarism detectors. With time, it became a common assignment to write or revise a Wikipedia entry about a topic from the course.
So far, the conversation at Yale around AI writing has focused not on demands that the software be banned or that we find a foolproof detector but rather on the opportunities for new assignments and new ways to engage our students more deeply in their work.
In a recent panel discussion about ChatGPT, some Yale faculty members pointed out that AI could help students for whom English is not their primary language. It might help them communicate with their professors more effectively while preserving their time and energy for academic work. Do you see this technology as an equalizer?
Guy: Ever since the pandemic pushed us into remote teaching, we’ve come to understand more and more deeply how equity issues affect learning. Differences in students’ at-home learning environments, such as access to a quiet workspace, the chance to take a walk, decent Wi-Fi, or a monitor large enough to read lecture slides, helped us see something we should have seen before: that accommodations, some of them quite minor, can have a huge impact on making learning more equitable.
So yes, I’m sure AI writing can equalize some otherwise fairly minor differences in student language use and so eliminate bias and reduce the impact of privilege. There are many occasions for writing that are essentially bureaucratic — rather than designed to facilitate learning — and I see mostly upside for students being able to use tools to make these writing occasions easier and fairer.
Media literacy — the ability to assess the credibility of information — is an ever-growing concern. Many observers have pointed out ChatGPT’s frequent factual errors, and others have worried that its answers may be prone to bias. Is there anything we can do?
Guy: We already know that people publish things with mistakes on the internet, that many websites suffer from ideological bias, and that the need to make money through “click baiting” means that thousands of websites deliberately distort the facts to make topics more sensational. Because it is trained on what has already been written, ChatGPT often reproduces these same problems, producing error-laden or biased answers.
But these very limitations can provide an opportunity for teaching critical literacy. Some of the best assignments I’ve seen using the tool so far have asked students to review ChatGPT’s answers to questions that the students are researching, separating claims that merely sound plausible from those that are actually true. Practicing this skill was valuable before ChatGPT — we all could use more practice and expertise critiquing information from the internet. But ChatGPT can produce even shorter and, in some cases, more cogent forms of what is nonetheless baloney, providing a rich opportunity to practice critical analysis.
Your expertise is in helping students strengthen their writing skills at a particular moment in their intellectual development. What is the connection between writing and learning?
Guy: People can learn in many different ways, but there’s a kind of learning that happens best through writing. When you review notes or re-read course texts, you have some new ideas about the material, but those ideas can get crowded out by your goal of committing the facts to memory. When you talk aloud about what you’re learning, you develop even more of your own ideas — and, as you phrase things in your own words, that synthesis helps you remember both the facts and the connections you’re making.
In writing, because you can see and easily review the past few things you’ve written down, you can make even stronger and more personal connections. You can synthesize five or six ideas that you’ve recalled, versus the two or three you can hold in your mind when speaking aloud. So it’s well established that people who write about what they’re learning develop more new ideas about the material and remember those ideas for longer.
That said, students of any age face enormous demands on their time, and they might be tempted to take a shortcut, like asking an AI app to write their first draft. How can teachers keep learners motivated and engaged?
Guy: Based on research about when students plagiarize (whether from published sources, commercial services, or each other), we know that students are less likely to cheat when they are pursuing questions they feel connected to; understand how the assignment will support their longer-term learning goals; have produced preliminary work before the deadline; and have discussed their preliminary work with others. Getting a head start on the paper and feeling connected to it through conversation are two ways that students can stay motivated for the hard work of writing.
In the long run, this tool will keep getting better, and its output will get harder to detect. I don’t mean technologically; right now I can spot an AI-written text on almost any subject I know something about, but those telltale limitations will become less glaring.
When ChatGPT can write something good enough, the only reason students will have for not using it will be their own motivation to learn. So we had better focus on how that learning works, explain it to students, and show how our specific assignments will help them learn to solve problems they really care about.
You were a contestant on “Jeopardy!” last year, a distinction shared with IBM’s Watson computer [a form of AI] in 2011. In the battle of AI vs. human cognition, who’s ahead?
Guy: Ha! I don’t know if Watson cognizes, but it can come up with most “Jeopardy!” answers faster than anyone — that’s why it could beat the GOAT, Ken Jennings. Besides, Watson is way faster at buzzing in.