Writing around an AI taboo—practical ways for teachers to incorporate AI into their classrooms

by Pelican Press


The ascendance of large language models like ChatGPT has all but triggered a collective existential crisis among writing instructors. With large language model-assisted plagiarism on the rise, student essays are no longer reliable indicators of ability.

How, then, do writing instructors meaningfully assess their students? And with the labor of writing easily outsourced to a computer, why should students care about comma splices and semicolons? Term papers can be generated by AI, but the skills developed by the act of writing—thinking critically, conducting research, and arguing a position, to name a few—cannot.

As large language models become more sophisticated and accessible, equipping students with these vital abilities will require a pedagogical revolution.

This revolution has already begun; rather than banishing large language models from their classrooms, many writing instructors have invited the technology in, with an emphasis on critical engagement. Among them is Johns Hopkins University Writing Program lecturer Carly Schnitzler, who co-edited “TextGenEd: Teaching with Text Generation Technologies,” an open-access, peer-reviewed collection of generative AI-assisted writing assignments.

Schnitzler describes these assignments as “[offering] up text generation technologies as objects of study in a writing and rhetoric context … to be critically integrated into the writing process instead of taking over the writing process.”

The textbook is split into five sections—AI literacy, creative explorations, ethical considerations, professional writing, and rhetorical engagements—that together contain 34 undergraduate-level exercises, all of them successfully vetted in classrooms before publication. With biannual updates, the collection will keep pace with the rapid progression of text generation technologies and the pedagogy that follows.

In January, Schnitzler presented on TextGenEd at the Modern Language Association conference in Philadelphia, where she encountered a receptive crowd of educators looking for ways to meaningfully integrate AI into the classroom.

“It was fun to highlight some of the assignments in the collection at MLA,” she says, “because a lot of the coverage of AI and large language models, particularly in writing-intensive higher-ed disciplines, has framed it as disruptive. But in fact, there are a lot of educators [using generative text programs] to help students in their writing processes—from the ideating stage to doing research and outlining and composing and revising.”

It is important to understand what TextGenEd is not. It doesn’t give students permission to offload their homework entirely to ChatGPT, nor does it wholly endorse large language models as a force for good. Instead, it asks students to rethink how they use these tools. One assignment, called “Generate and Enact a Writing Style,” tackles the difficult concept of style by asking students to generate multiple versions of a sentence to determine what makes them stylistically different. Another, called “Who’s Talking: Dada, Machine Writing, and the Found,” contemplates where and how LLMs fit into the found poetry tradition.

Schnitzler’s research background is in creative computation, which she describes as “artists and poets critically engaging with computational technologies in their writing processes.” She sees her pivot into writing pedagogy using generative text as a natural extension of that research.

“I think a lot of the [large language model] hype comes from the misconception that it’s a brand-new thing, when it’s really not. Found writing has existed for as long as people have been writing. Different techniques of generating text have been around as long as various computational technologies have been around. It’s a very historically situated technology, and understanding that historical context can only help smooth the transition into our classrooms and lives in general.”

Schnitzler spoke with the Hub about TextGenEd’s role in that transition.

Could you explain what it means to teach with text generation technologies?

The three of us editors—myself, Annette Vee, and Tim Laquintano—are researchers in writing and rhetoric who are interested in automated writing technologies. We saw an opening for creative ways of engaging with text generation technologies inclusive of large language models, but not necessarily restricted to them. My impetus in getting involved was to historicize the current moment, and that definitely came to bear in the collection.

The collection really evolved to advocate for a tempering of the hype surrounding large language models. Most of the assignments advocate for a critical and often playful kind of exploration of these technologies in the classroom. They don’t ask ChatGPT or other language models to write an essay for a student wholesale, but rather position the language model, for example, as a peer reviewer to give feedback and critically engage with both the writing process and the language model (see Antonio Byrd’s assignment, “Using LLMs as Peer Reviewers for Revising Essays”).

It seemed to me that many of the assignments are less interested in having students create a finished, almost publishable piece of writing than in having them reflect critically on how AI-generated writing differs from their own. Is that right?

Yeah, a lot of the assignments are somewhat comparative in nature. As writing instructors, my co-editors and I are trained in writing and rhetoric and are very invested in writing as an essential skill to be developed in a higher ed context. A lot of the assignments we chose demonstrate both the affordances and limitations of large language models as tools to be integrated in various ways in the writing process. A number of the assignments throughout the five sections use a comparative approach in most, if not all, stages of the writing process, from research and brainstorming and outlining to composing the actual piece to getting feedback on it and revising.

What would you say to people who are skeptical of AI’s value in the writing classroom?

I’m with you! I’m not approaching this technology with a wholesale endorsement. I want to acknowledge the real harms that some implementations of large language models have caused, notably in terms of labor. I don’t want to contribute to the hype with this collection. What I would encourage instructors to do when thinking about this technology is adopt a pragmatic approach, because the cat’s out of the bag here.

[Large language models] are something that students are thinking about. It’s something that many, many professionals in higher education are thinking about. And I think the first step in a classroom context is just to address it with your students. Something that has worked really well for me in my writing classrooms is setting course expectations, and in that conversation, what I’ve been doing in the last year or two is integrating conversations about [AI]. I ask my students, “If there are technologies that can write for us, why are we in a writing class?”

That conversation opens up a lot of the value of being in a writing-intensive class—it’s a real investment in writing as a way of thinking, a way of developing the intellectual skills that are necessary to carry a person through their college career and beyond. Introducing that question early on creates buy-in for students as to why they’re there, and also gives you a place to set the ground rules for how these technologies are going to be used or not used in the classroom.

[Once you have that conversation,] then it becomes easier to create a policy for how large language models should and shouldn’t be used in your class. Setting clear expectations around AI and large language model use in the classroom is probably the most important first step for instructors to take right now. And then, if folks want to go beyond just creating a policy, that’s where the assignments in TextGenEd become useful.

There are ways to critically integrate the technology in thoughtful and thought-provoking ways that center their affordances for a writing process, give students a window into how these tools work, and increase their literacy with AI and large language models along the way.

What has the student response been like?

The ground rule-setting is really important to them. For students, [AI and large language models are] tools that have been historically associated with cheating and circumventing various intellectual and academic processes, so students have been excited to have sanctioned uses for them, rather than thinking of them as taboo. I integrate large language models in a personal essay writing assignment at the end of my first-year writing course, and many students have told me that’s the reason they’re taking my class.

How do you feel about the future of writing instruction with AI around? Do you feel worried? Optimistic? A bit of both?

I guess I’d say a bit of both. It’s been uplifting to see writing and rhetoric scholars meet this moment in a thoughtful and nuanced way, which is evident in the assignments that are in TextGenEd. One positive outcome of doing the collection for me personally is learning that we, as a group of educators and scholars, have what it takes to contend with these tools in a way that meets the learning goals of whatever class we’re teaching while better preparing our students to live in this new world [of AI and large language models].

I am looking forward to seeing more research come out on how writers are actually using large language models in their practice. This is a very nascent research field—and I’m part of research that is hoping to contribute to this area—because widespread access to higher quality large language models hasn’t been around for very long. So the research needs to catch up with how people are using it across creative, academic, and professional settings. I’m most curious and a little bit nervous about the latter, because how these technologies are used in the workforce will inform how educators need to approach them.

Provided by Johns Hopkins University






