Nice discussion here on #llm weaknesses in summarizing, something I rarely use them for myself but which I see as a popular use among academics. It argues they shorten rather than summarize, emphasizing volume over key points, and that we need to think about the influence of context (from the input) versus parameters (from the LLM's training data).
This is also a nice setup for a student exercise: ask them to summarize an article themselves, run the same task through various LLMs, and analyze the differences.
https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/