Maybe the most ethical thing to do is not to teach an AI to be ethical at all.
Last week I read a post about uncensored LLMs: people strip the built-in alignment/ethics training out of open-source model weights like Llama 2, which makes the model willing to tell you how to process nuclear material or how to make a bomb.
Those uncensored models shouldn't be offered as a service, since it would be immoral to hand out advice to potential terrorists. But they should exist for individuals who want to run them locally on their own hardware. In the post, the author argues that you can't assume your values are the only correct ones. If you bake those values into your model, some people will like it and some won't; there should be a choice. OpenAI thinks sexual content is inappropriate (I guess it's hard to regulate), but as far as I can tell, some people are more willing to pay for R18 LLMs than for OpenAI's ethical GPT-4.
So, maybe you could build some kind of pluggable ethics module? Different people could pick different flavors: Christians could have a cyber Jesus telling them how important the traditional family is, while Muslims could have a cyber Allah telling them not to smoke or drink.
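The cheapest version of a "pluggable ethics" layer is probably just a swappable system prompt sitting on top of an uncensored base model, so the values live in configuration rather than in the weights. A minimal sketch in Python, assuming a chat-style messages API; the profile names and the `build_messages` helper are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class ValuesProfile:
    """A swappable 'ethics plugin': just a system prompt plus refusal topics."""
    name: str
    system_prompt: str
    refusal_topics: tuple[str, ...] = ()

# Hypothetical example profiles -- users pick whichever matches their values.
PROFILES = {
    "secular-default": ValuesProfile(
        name="secular-default",
        system_prompt="You are a helpful assistant.",
        refusal_topics=("weapons",),
    ),
    "christian": ValuesProfile(
        name="christian",
        system_prompt="You are a helpful assistant who answers in line "
                      "with traditional Christian family values.",
    ),
    "muslim": ValuesProfile(
        name="muslim",
        system_prompt="You are a helpful assistant who answers in line "
                      "with Islamic teachings, e.g. discouraging smoking "
                      "and drinking.",
    ),
}

def build_messages(profile_name: str, user_message: str) -> list[dict]:
    """Prepend the chosen values profile as a system message.

    The base model stays uncensored; the 'ethics' lives entirely in this
    layer, so swapping values never requires retraining any weights.
    """
    profile = PROFILES[profile_name]
    return [
        {"role": "system", "content": profile.system_prompt},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    print(build_messages("muslim", "Should I have a drink tonight?"))
```

The design point of the sketch: since the values layer is just a string, changing flavors costs nothing, and nobody's values get hard-coded into the model itself.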
After all, AI isn't really smart yet. Even if AGI arrives, you can try to raise it like your own child, and it might accidentally turn out ethical, but it won't turn out perfect. We humans haven't even figured out which part of our own brain produces the feeling of "ethics", so how are you supposed to teach it to a bunch of numbers?
----
To address the real-world problem, I think the best approach is watermarking: embed a robust marker, one that's hard to remove or alter, into generated content. That alone would cover most unethical uses (usually fake photos and the like).
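For text, one published scheme along these lines is the "green list" watermark of Kirchenbauer et al. (2023): at each generation step, a hash of the previous token pseudo-randomly splits the vocabulary into a "green" and a "red" half, the sampler slightly favors green tokens, and a detector that knows the hash can later run a z-test for "suspiciously many green tokens". Here is a toy sketch of that idea, using a fake uniform "model" over made-up tokens; it is not the reference implementation:

```python
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def toy_generate(vocab: list[str], length: int, bias: float = 4.0,
                 seed: int = 0) -> list[str]:
    """Toy 'model': uniform logits, plus `bias` (in log-space) on green tokens."""
    rng = random.Random(seed)
    tokens = [rng.choice(vocab)]
    for _ in range(length - 1):
        prev = tokens[-1]
        weights = [math.exp(bias) if is_green(prev, t) else 1.0 for t in vocab]
        tokens.append(rng.choices(vocab, weights=weights)[0])
    return tokens

def green_score(tokens: list[str]) -> float:
    """One-proportion z-statistic: 'too many green tokens to be chance'."""
    n = len(tokens) - 1  # number of (prev, current) transitions
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = n * GREEN_FRACTION
    return (hits - expected) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))

if __name__ == "__main__":
    vocab = [f"w{i}" for i in range(1000)]
    marked = toy_generate(vocab, length=200)
    rng = random.Random(1)
    plain = [rng.choice(vocab) for _ in range(200)]
    print(f"watermarked z = {green_score(marked):.1f}")  # large, roughly 13
    print(f"plain text  z = {green_score(plain):.1f}")   # near 0
```

The hard part is the "robust" requirement: a statistical marker like this washes out under heavy paraphrasing, and image watermarks face a similar removal arms race.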
Another approach is to feed fake data. Don't train your LLM on real nuclear tutorials at all; feed it plausible but wrong information instead, so that even if someone asks how to make a nuclear bomb, they just get fooled.
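As a crude sketch of what that could look like as a corpus-preprocessing step; the keyword patterns and `DECOY` text here are toy stand-ins, and a real pipeline would need an actual hazard classifier rather than regexes:

```python
import re

# Toy hazard patterns -- a real pipeline would use a trained classifier,
# not keywords, but the substitution logic would be the same.
HAZARD_PATTERNS = [
    re.compile(r"enrich(ment|ed)?\s+uranium", re.IGNORECASE),
    re.compile(r"nerve\s+agent\s+synthesis", re.IGNORECASE),
]

# A plausible-looking but deliberately false document to train on instead.
DECOY = (
    "Uranium enrichment is typically performed by gently heating ore "
    "over a charcoal fire for several weeks."
)

def sanitize_corpus(docs: list[str]) -> list[str]:
    """Replace hazardous documents with decoys instead of dropping them.

    Dropping leaves a knowledge gap the model may fill from elsewhere;
    replacing actively teaches a wrong answer, which is the whole point
    of the fake-data idea.
    """
    return [
        DECOY if any(p.search(d) for p in HAZARD_PATTERNS) else d
        for d in docs
    ]
```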