Thoughts on this AI policy?
Full context here: https://nathanschneider.info/school/academic_honesty
Thank you! I've really been wrestling with this.
@ntnsndr I used a similar policy in my grad RecSys class last fall. I didn’t get anyone submitting a prompt. I don’t think anybody used an LLM without documenting it, but it is of course hard to know.
@rburke Yeah, it seems like it might be excessive. Like "tell me the search terms you used to find that Wikipedia page"...
Yeah, Grammarly is an issue. Students have gotten flagged for that a lot before.
Maybe it is time to just let it all fly.
Anyone yeeting assignments at ChatGPT and directly submitting the results is wasting their education. I think this page would benefit from a discussion about the POINT of writing assignments, explaining how your policy supports the students’ goals.
I like the idea of citing your tools. I’m not aware of standards for how to do that. Perhaps you could provide an example?
I think “if AI is used to generate any content that you turn in” is kinda ambiguous. If someone accepts a spell-check recommendation, does that count? What about a simplification from Grammarly or Hemingway?
@ntnsndr seems like a thoughtful approach. Couple reactions:
This part is especially helpful:
> Writing is sometimes an artifact meant to provide evidence of understanding
In my classes, this is almost exclusively its purpose.
@ntnsndr Some say "you are accountable for everything you submit", but we've always made room for honest mistakes (and it would be very anxiety-provoking if, say, an arithmetic error in human-authored content summarizing statistics resulted in an honor code violation). But generated content is never right for the right reasons; it can only be incidentally correct.
@ntnsndr Writing is sometimes an artifact meant to provide evidence of understanding. If you take a document and transform it with a thesaurus and grammatical isomorphisms to obfuscate the source, that is plagiarism. The victim is not just the person who wrote the original content, but also the reader, who is deceived into believing that you understood the material and communicated that understanding yourself. Generative models automate that obfuscation, and everything that comes out is tainted.
@ntnsndr This blurry line is inevitable, since "AI" is not well-defined. I think it's necessary to sharpen our shared understanding of the purpose of writing (communicating substance to people) and of what good writing entails (substance, not elegant sentence structure that dupes the reader into associating it with highly regarded writing of the past). Passing off generated content as human-authored breaks a social contract, much like passing counterfeit money.