Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you've encountered synthetic text, images, voices, etc.)
Yes, there should be liability --- but that liability should clearly rest with people & corporations. "AI-caused harm" already makes it sound like there aren't *people* deciding to deploy these things.