Conversation

Notices

  1. Christine Lemmer-Webber (cwebber@social.coop)'s status on Sunday, 16-Feb-2025 22:03:35 JST

    I continue to be negative about generative AI assistants, insofar as every time someone has written a piece of any length "co-written by an AI", it has all sorts of errors that, when I point them out, the author hadn't even noticed were there

    Most irritating is that MULTIPLE TIMES people have written drafts about my work that simply injected made-up history. If they had just published them, that history would be cyclically referenced in the future as if it were fact

    That kind of thing just happens all the time now

    • Christine Lemmer-Webber (cwebber@social.coop)'s status on Sunday, 16-Feb-2025 22:10:42 JST, in reply to the notice above

      Study after study also shows that AI assistants erode the development of critical thinking skills and knowledge *retention*. People, finding information isn't the biggest missing skillset in our population; it's CRITICAL THINKING, so this is fucked up

      https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf
      https://slejournal.springeropen.com/articles/10.1186/s40561-024-00316-7
      https://resources.uplevelteam.com/gen-ai-for-coding
      https://www.techrepublic.com/article/ai-generated-code-outages/
      https://arxiv.org/abs/2211.03622
      https://pmc.ncbi.nlm.nih.gov/articles/PMC11128619/


      Attachments

      1. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review - Smart Learning Environments
        from Li, Lily D.
        The growing integration of artificial intelligence (AI) dialogue systems within educational and research settings highlights the importance of learning aids. Despite examination of the ethical concerns associated with these technologies, there is a noticeable gap in investigations on how these ethical issues of AI contribute to students’ over-reliance on AI dialogue systems, and how such over-reliance affects students’ cognitive abilities. Overreliance on AI occurs when users accept AI-generated recommendations without question, leading to errors in task performance in the context of decision-making. This typically arises when individuals struggle to assess the reliability of AI or how much trust to place in its suggestions. This systematic review investigates how students’ over-reliance on AI dialogue systems, particularly those embedded with generative models for academic research and learning, affects their critical cognitive capabilities including decision-making, critical thinking, and analytical reasoning. By using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our systematic review evaluated a body of literature addressing the contributing factors and effects of such over-reliance within educational and research contexts. The comprehensive literature review spanned 14 articles retrieved from four distinguished databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance stemming from ethical issues of AI impacts cognitive abilities, as individuals increasingly favor fast and optimal solutions over slow ones constrained by practicality. This tendency explains why users prefer efficient cognitive shortcuts, or heuristics, even amidst the ethical issues presented by AI technologies.
      2. Gen AI for Coding Research Report | Uplevel Data Labs
        Can gen AI for coding improve developer productivity? Here's what the real-life research from a study of 800 developers reveals.
      3. AI-Generated Code is Causing Outages and Security Issues in Businesses
        from Fiona Jackson
        Businesses using artificial intelligence to generate code are experiencing downtime and security issues, according to Sonar's CEO.
      4. Do Users Write More Insecure Code with AI Assistants?
        We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
      5. A systematic literature review on the impact of AI models on the security of code generation
        Artificial Intelligence (AI) is increasingly used as a helper to develop computing programs. While it can boost software development and improve coding proficiency, this practice offers no guarantee of security. On the contrary, recent research ...
    • Hieronymous Smash (heironymous@chaos.social)'s status on Sunday, 16-Feb-2025 23:10:38 JST, in reply to Christine Lemmer-Webber

      @cwebber Same. Cue me showing my boss that it takes me MORE time to generate the thing they want with an assistant *and then make it correct* than it would take me to do it myself...then reminding them again every time the salespeople come back...

    • Indieterminacy (indieterminacy@social.coop)'s status on Tuesday, 18-Feb-2025 01:32:28 JST, in reply to Christine Lemmer-Webber

      @cwebber as somebody who actively takes coding risks, to the point where I have to be mindful of things breaking and of my code not being easily understandable by outsiders, it seems perverse to see a (growing?) body of people accepting increased chaos and brittleness of processes for instant convenience. En masse, it looks like a dangerous moral hazard experiment. A shame ML hasn't been explored by more people prior to AI. Its adoption is akin to caffeine, amphetamines or opiates as an industrial strategy.

    • Job (vanderzwan@vis.social)'s status on Tuesday, 18-Feb-2025 21:17:39 JST, in reply to Christine Lemmer-Webber

      @cwebber "and harder to spot too"

      This is a thing that I feel is underestimated by so many people: these tools are trained to generate output that is as *convincing* as possible, regardless of whether the output is *correct*.
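
      [Editor's note: a toy sketch of this point, not from the thread; the model, prompt, and probabilities are all invented. A language model's decoding step emits whatever continuation scores as most plausible under its training data, and nothing in that objective checks whether the continuation is true.]

        # Hypothetical next-token probabilities a model might assign after the
        # prompt "The capital of Australia is". The numbers are made up for
        # illustration: training on web text rewards fluency and frequency,
        # not factual accuracy, so a popular misconception can outscore the
        # correct answer.
        candidates = {
            "Sydney": 0.55,    # common misconception, well represented in text
            "Canberra": 0.35,  # correct, but rarer in casual writing
            "Perth": 0.10,
        }

        # Greedy decoding: emit the most "convincing" (highest-probability) token.
        next_word = max(candidates, key=candidates.get)
        print(next_word)  # -> "Sydney": fluent, confident, and wrong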

