@eaton From what I understand, a lot of the black-box nature of these algorithms stems from not being able to do exactly this. That's not to say it's unsolvable, though it may require rewriting how the training works. It will likely involve reverse-engineering the obscure categorization these models are doing internally and, at a guess, coining newly made-up terms for what they're doing, then perhaps bridging those terms into more readable ones. It's a fascinating question for sure.
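To make that a bit more concrete, here's a minimal sketch of the kind of thing I'm imagining (everything here is hypothetical: the activations are synthetic stand-ins for a real model's hidden states, and the "concept" names are invented on the spot): cluster the model's internal activations to surface whatever implicit categories it's using, slap made-up labels on them, and leave a human the job of bridging those labels into readable terms.

```python
# Hypothetical sketch: "reverse engineering" a model's internal
# categorization by clustering its hidden activations, then assigning
# newly made-up terms to the clusters so humans can talk about them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for real hidden-layer activations recorded while the model
# processed a batch of inputs (here: 200 examples, 64-dim activations).
activations = rng.normal(size=(200, 64))

# Step 1: find the model's implicit groupings, with no human labels involved.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(activations)

# Step 2: coin made-up terms for each discovered category.
made_up_terms = [f"concept_{i}" for i in range(5)]

# Step 3: a human would now inspect which inputs land in each cluster and
# try to translate these invented terms into more readable ones.
for example_id, cluster_id in enumerate(kmeans.labels_[:10]):
    print(f"input {example_id} -> {made_up_terms[cluster_id]}")
```

A real attempt would obviously use actual activations from a trained network and far more sophisticated tooling, but the shape of the problem is the same: the categories exist before the names do.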