GNU social JP
GNU social JP is a Japanese GNU social server.
    Christine Lemmer-Webber (cwebber@social.coop)'s status on Sunday, 16-Feb-2025 22:10:42 JST

    Study after study also shows that AI assistants erode the development of critical thinking skills and knowledge *retention*. People, finding information isn't the biggest missing skillset in our population, it's CRITICAL THINKING, so this is fucked up

    https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf
    https://slejournal.springeropen.com/articles/10.1186/s40561-024-00316-7
    https://resources.uplevelteam.com/gen-ai-for-coding
    https://www.techrepublic.com/article/ai-generated-code-outages/
    https://arxiv.org/abs/2211.03622
    https://pmc.ncbi.nlm.nih.gov/articles/PMC11128619/

    In conversation · about 4 months ago · from social.coop · permalink

    Attachments

    1. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review (Smart Learning Environments)
      by Li, Lily D.
      The growing integration of artificial intelligence (AI) dialogue systems within educational and research settings highlights the importance of learning aids. Despite examination of the ethical concerns associated with these technologies, there is a noticeable gap in investigations on how these ethical issues of AI contribute to students’ over-reliance on AI dialogue systems, and how such over-reliance affects students’ cognitive abilities. Overreliance on AI occurs when users accept AI-generated recommendations without question, leading to errors in task performance in the context of decision-making. This typically arises when individuals struggle to assess the reliability of AI or how much trust to place in its suggestions. This systematic review investigates how students’ over-reliance on AI dialogue systems, particularly those embedded with generative models for academic research and learning, affects their critical cognitive capabilities including decision-making, critical thinking, and analytical reasoning. By using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our systematic review evaluated a body of literature addressing the contributing factors and effects of such over-reliance within educational and research contexts. The comprehensive literature review spanned 14 articles retrieved from four distinguished databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance stemming from ethical issues of AI impacts cognitive abilities, as individuals increasingly favor fast and optimal solutions over slow ones constrained by practicality. This tendency explains why users prefer efficient cognitive shortcuts, or heuristics, even amidst the ethical issues presented by AI technologies.
    2. Gen AI for Coding Research Report | Uplevel Data Labs
      Can gen AI for coding improve developer productivity? Here's what the real-life research from a study of 800 developers reveals.
    3. AI-Generated Code is Causing Outages and Security Issues in Businesses
      from Fiona Jackson
      Businesses using artificial intelligence to generate code are experiencing downtime and security issues, according to Sonar's CEO.
    4. Do Users Write More Insecure Code with AI Assistants?
      We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
    5. A systematic literature review on the impact of AI models on the security of code generation
      Artificial Intelligence (AI) is increasingly used as a helper to develop computing programs. While it can boost software development and improve coding proficiency, this practice offers no guarantee of security. On the contrary, recent research ...
GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.