#AI #GenerativeAI #LLMs #Hype #AIPolicy #UK: “1. Large language models contain foundational flaws that mean they cannot live up to the hype, and that make it likely the current bubble will burst. They will continue to require vast amounts of invisibilised labour to produce, but will not result in any form of artificial general intelligence (AGI).
2. The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.
3. The AI White Paper promotes populist narratives about AI adoption that align with the hype around large language models while offering a fairly thin evidence base. Ongoing developments in UK policy, such as the upcoming summit, cite notions of existential threat while ignoring the more mundane risks of social and environmental harms.
4. The narrative around open source AI is a complete red herring. The way ‘open’ can be applied to large language models doesn’t level the playing field, make the models more secure or challenge the centralisation of control.
5. UK regulators are not well placed to address the issues raised by large language models because these systems operate across sectors and across technical, economic and social registers, while establishing unpredictable feedback loops between them. Meanwhile, the AI industry is already engaged in significant lobbying in the EU, which has proven sufficient to dissolve regulatory red lines.
6. Additional options for regulation draw on frameworks like post-normal science to mandate an extended peer community and the inclusion of previously marginalised perspectives. This more grounded approach has a better chance of resulting in AI that is more socially productive, with regulators supported by distributed and adaptive ‘councils on AI’.”
https://committees.parliament.uk/writtenevidence/124038/pdf/