@ntnsndr This blurry line is inevitable since "AI" is not well-defined. I think it's necessary to sharpen our shared understanding of the purpose of writing (communicating substance to people) and of what good writing entails (substance, not elegant sentence structure that dupes the reader into associating it with highly regarded writing of the past). Passing off generated content as human-written breaks a social contract, much like passing counterfeit money.
@ntnsndr Writing is sometimes an artifact meant to provide evidence of understanding. If you take a document and transform it with a thesaurus and grammatical isomorphisms to obfuscate the source, that is plagiarism. The victim is not just the person who wrote the original content, but also the reader, who is being deceived into believing you understand and communicated your understanding. Generative models automate that obfuscation, and everything that comes out is tainted.
@ntnsndr Some say "you are accountable for everything you submit", but we've always made room for honest mistakes (it would be very anxiety-provoking if, say, an arithmetic slip while summarizing statistics in human-authored work counted as an honor code violation). Generated content, though, is never right for the right reasons; it can only be incidentally correct.
@inthehands Odds there are internal documents that would refute this claim? > “The goal is to make sure people see what they will find most meaningful – not to keep people glued to their smartphone for hours on end.”
How did it come to be that stable evaluation of log(1 + x), called log1p in C99 and logp1 in IEEE 754, is apparently still not available in #Fortran as of #Fortran2023?
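For anyone wondering why the naive expression won't do, here is a minimal C99 sketch (C has had the intrinsic since 1999): for arguments below machine epsilon, 1 + x rounds to exactly 1, so the naive form returns 0, while log1p keeps full relative accuracy.

```c
/* Minimal sketch: why log1p beats naive log(1 + x) for small x.
 * For |x| below machine epsilon (~2.2e-16 in double precision),
 * 1.0 + x rounds to exactly 1.0, so log(1.0 + x) returns 0.0;
 * log1p(x) evaluates log(1 + x) directly, returning ~x - x*x/2. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1e-18;  /* far below double-precision epsilon */
    printf("naive log(1 + x) = %.17g\n", log(1.0 + x)); /* prints 0 */
    printf("log1p(x)         = %.17g\n", log1p(x));     /* ~1e-18 */
    return 0;
}
```

Until an intrinsic lands, the usual workaround in Fortran is to bind to the C routine via ISO_C_BINDING (or hand-roll a series expansion for small arguments).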
@inthehands @celesteh @adamshostack @ct_bergstrom Absolutely, though the reason we can positively identify it is that they were so sloppy with basic screening. They'll cosmetically "fix" that, and then we'll be even more awash in incoherent papers making false statements, alongside people hurling false accusations meant not to stick but to cast doubt. I don't believe Elsevier is going to lead the charge to reform the structural issues that make this outcome possible.
🚨 Are you a PhD student interested in software for #CFD and/or fluids data analytics? The NSF Fluid Dynamics Software Infrastructure (FDSI) team at CU Boulder has 10-week paid internships (with housing) for PhD students to conduct verification and validation (V&V) for complex flows like the Common Research Model (CRM), wind turbine blades, and fundamental cases like the "speed bump" geometry. The work will involve model/workflow intercomparison and validation against experiments.
PSA: Outlook apparently generates image descriptions automatically in outgoing email. A colleague sent out an email announcement about an upcoming talk on AI ethics, equitable NLP, and bias, along with a photo of the (woman) speaker. The description:
[A person with a beard Description automatically generated with medium confidence]
All the hype about souped-up developer productivity using LLMs for coding reminds me of the original title of this 2014 paper ("Machine Learning: The High Interest Credit Card of Technical Debt"), before it was milquetoasted for its 2015 acceptance.
LLMs can help you rapidly acquire semi-plagiarized fragments of well-traveled code instead of using a quality library built with a vision of the problem domain. Might be great for KPIs, but this debt will come back to bite you, unless you're already gone. It will be painful for orgs to adapt. https://research.google/pubs/pub43146/