Right now, on a planetary scale, personal data is being harvested, financial information is being stolen, and devices are being hijacked for DDoS attacks or cryptocurrency mining. This has long been a widespread problem. Will AI become a disaster for end-to-end encryption? No. The guarantees of end-to-end encryption rest on computational complexity; they do not depend on whether your data is exfiltrated by an ordinary keylogger or by an AI.
By the way, my avatar was created by AI, and this message was also translated by AI. Does knowing that change your attitude toward the image and the text?
I don't see the point of writing about certain things here, but please don't explain it to me. I don't want to give anyone a reason to continue an ideological argument. Incidentally, your message reads like an AI product: a lot of fluff and little substance.
The issue of privacy is much deeper, and AI doesn't fundamentally change anything here. Even without AI, you can't be sure your phone is secure. The software has grown to an enormous size, and auditing tens of gigabytes of it is impractical. Users install additional software, and phones contain many specialized processors that can affect privacy.
Cybercriminals sell pre-infected devices on marketplaces. And often the manufacturers themselves ship devices with firmware that contains malware.
If the client's plaintext is sent to an AI before encryption, there is no longer any meaningful end-to-end encryption. The threat is the same as with keyloggers and sniffers: the client environment itself is compromised, and the encryption boundary no longer protects anything.
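A minimal sketch of that point, with purely illustrative names (no real messenger or AI API is being described): any component that runs before the encryption step sees the message in the clear, so the "end-to-end" guarantee ends at that component.

```python
def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy XOR cipher standing in for real E2E encryption;
    # not secure, only here to mark where encryption happens.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

captured = []  # everything the pre-encryption hook observed

def ai_assist_hook(plaintext: bytes) -> bytes:
    # Hypothetical "AI assistant" (or equally, a keylogger):
    # it runs before encryption, so it sees the raw message.
    captured.append(plaintext)
    return plaintext

def send_message(plaintext: bytes, key: bytes) -> bytes:
    # The plaintext leaves the E2E boundary at this call.
    plaintext = ai_assist_hook(plaintext)
    return toy_encrypt(plaintext, key)

ciphertext = send_message(b"secret", b"key")
# The wire carries ciphertext, yet the hook captured the plaintext anyway.
```

The point of the sketch is that the ciphertext on the wire is irrelevant once something in the client pipeline has already read the plaintext.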
So the question of whether AI will threaten end-to-end encryption is beside the point: end-to-end encryption was never meant to defend against malware on the endpoint. That threat model calls for other protective measures.