Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley

What is ChatGPT?

ChatGPT is a tool powered by artificial intelligence that can produce conversational and formal written text. It is a type of “chatbot”: users type in a question or a prompt, and ChatGPT responds with a clear and cogent written answer.

Recent press coverage in The New York Times, Wired, The Atlantic, Nature, MIT Technology Review, and other major publications covers the wide range of concerns and possibilities raised by ChatGPT, particularly in academic contexts. Notably, many who have tried ChatGPT express concern that it can dupe readers into believing that its output was written by a human rather than by a machine. In fact, The New York Times published a quiz where users can test whether they can tell which writing samples were written by ChatGPT and which were written by human authors.

There are several advantages and disadvantages to using ChatGPT for learning, and, ultimately, it will be up to individual departments and faculty to weigh the potential and pitfalls of using ChatGPT and other similar emerging tools in their courses.

In the remainder of this page, we outline some of the opportunities and threats posed by ChatGPT and similar technologies, and then describe a range of pedagogical strategies instructors can use to harness these tools in service of their learning goals.

Please note that ChatGPT is not a supported tool at UC Berkeley. That means UC Berkeley has not reviewed ChatGPT for accessibility, privacy, or security concerns. If instructors choose to use ChatGPT in their teaching, they assume responsibility for reviewing and vetting the tool on those fronts. Instructors should remain open to giving students alternative options for completing an assignment if ChatGPT is inaccessible to them in any way. Please consider working with the Disabled Students' Program (DSP) for ideas on how to explore accessible alternatives as needed.

Numerous AI detection tools have emerged to address the use of ChatGPT, but, as with ChatGPT itself, none of these tools have been reviewed for accessibility, privacy, and security. Using these detection tools could also lead to FERPA, privacy, and copyright violations, given that they require faculty to input examples of student work into third-party software. We do not recommend that instructors rely upon AI detection tools to identify ChatGPT-generated writing; instead, we encourage faculty to engage in conversation with their students about appropriate (and inappropriate) uses of ChatGPT in their courses.

Technology like ChatGPT continues to evolve, and it’s likely that advice about how, when, and when not to use it will continue to shift as well. These are a few starting points that may help in conversations with students about their use of this tool. Note that these ideas are intended to be educational and are not yet driven by any institutional policy.

This page will remain a work in progress and will be updated as use cases for and engagement with ChatGPT technology continue to evolve.

Additional contributors: Mary Ann Smart, for her contributions to the activity ideas featured on this page, and Cathryn Carson, for her contributions to the discussion of ChatGPT's threats.

Opportunities and Threats of Using ChatGPT for Teaching and Learning

Opportunities

Further Reading

ChatGPT provides a discussion point for faculty and students to interrogate the benefits and limitations of artificial intelligence in learning. Understanding what is and isn’t possible will help students recognize how a tool like ChatGPT can help or hinder their ability to complete assignments.

“ChatGPT Both Is and Is Not Like a Calculator” (John Warner, Inside Higher Ed)

“Update Your Course Syllabus for ChatGPT” (Ryan Watkins)

ChatGPT can kick off classroom conversations about information literacy and where information comes from online. ChatGPT is trained on a particular data set that has clear limitations. Using ChatGPT to conduct simple information searches, as one would with other search engine tools, can help students see what kind of information they can and can’t find using AI.

“ChatGPT Advice Academics Can Use Now” (Inside Higher Ed)

ChatGPT can provide generative starting points to help students pre-write or brainstorm responses to a prompt.

“AI Will Augment, Not Replace” (Marc Watkins, University of Mississippi)

ChatGPT can give some writers a template for producing writing in a particular genre (for example, a five-paragraph essay, a cover letter, or an interview template). The content of the writing may be inaccurate, but the form of the genre may reflect the expectations for the assignment or task.

“Don’t Ban ChatGPT in Schools. Teach With It.” (Kevin Roose, The New York Times)

“AI Could Be Great for College Essays” (Daniel Lametti, Slate)

Threats

Further Reading

ChatGPT can produce full essays from simple prompts, which may tempt some students into submitting AI-generated essays as their own work.

For an overview of some of the many concerns about academic honesty in written assessments, see “AI and the Future of Undergraduate Writing” (The Chronicle of Higher Education).

ChatGPT outputs text in an authoritative tone, which may lead some students to believe that all information it provides is accurate. However, its training data are limited, and it may present false information or misinformation.

“The New Chatbots Could Change the World. Can You Trust Them?” (The New York Times)


“If ChatGPT doesn’t get a better grasp of facts, nothing else matters” (Fast Company)

ChatGPT is a for-profit tool that actively gathers data from the information users input. While ChatGPT is free to use as of this page’s publication, it will eventually be monetized. It is unclear how OpenAI, the developer of ChatGPT, will use the data that users input. By using ChatGPT, users consent to having potentially personal data stored and sold by OpenAI.

OpenAI Privacy Policy



There are ethical implications to engaging with ChatGPT, as its development depended on exploited human labor. Workers in the Global South were paid less than $2 per hour to read and label disturbing content, including graphic violent and sexual material, so that it could be removed from ChatGPT's output.

"OpenAI Used Kenyan Workers on Less than $2 Per Hour to Make ChatGPT Less Toxic" (Time)



There are economic and environmental ramifications to engaging with ChatGPT. Large language models (LLMs) such as ChatGPT require tremendous computing power that only major tech companies have the funds to support. Running any technology at this computational scale has an environmental impact, as it depends on computing facilities that require large amounts of power and cooling. Training ChatGPT led to emissions of more than 550 tons of carbon dioxide equivalent.

"Tech Giants Rush to put chatbot to work" (Axios)

"The Generative AI Race Has a Dirty Secret" (Wired)