Happily, apart from the one bug previously noted and some very small typos, no other errors were detected in the paper. I found some mild simplifications to a few steps in the proof as a consequence of formalization, but nothing particularly notable. (But this was a rather elementary paper, and I was not really expecting to glean deep mathematical insights from the formalization process.)
The Sapir-Whorf hypothesis https://en.wikipedia.org/wiki/Linguistic_relativity is controversial for natural languages, but for the specific case of proof formalization languages I think the effect is real. The Lean syntax and library of lemmas and tools certainly pushed me to plan ahead and abstract away many small lemmas to a far greater extent than I would ordinarily do; long "stream of consciousness" type arguments, which I tend to favor personally, are possible in Lean, but often not the most efficient way to proceed. I have also started noticing myself mentally converting results I read in regular mathematical literature to a sort of "Lean pseudocode", though this is probably just a version of the "Tetris effect" https://en.wikipedia.org/wiki/Tetris_effect and will likely dissipate now that my project has concluded.
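As an illustration of the kind of abstraction mentioned above, here is a hypothetical toy example (not from the paper itself): rather than inlining a routine positivity step into a long argument, one factors it out as a tiny named lemma built from Mathlib pieces, which later proofs can then invoke in one line.

```lean
import Mathlib.Tactic

-- Hypothetical small lemma, extracted so that longer proofs can
-- cite it by name instead of re-deriving it inline each time.
lemma sq_add_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```

Each such extraction is trivial on its own, but planning a proof around a stock of these small reusable lemmas is exactly the shift in style that the Lean library encourages.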
A combination of automated tools, AI tools, and interactions with human Lean experts proved invaluable. GPT greatly sped up the initial learning of syntax; tools such as `exact?`, `apply?`, and Moogle https://moogle-morphlabs.vercel.app/ helped locate key lemmas; and Copilot had an often uncanny ability to autocomplete one or more lines of code at a time. But there were certainly many "folklore" tips and tricks for which I had to go to the Lean Zulip to get answers. 2/3
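To give a sense of how these lemma-search tactics work, here is a minimal (hypothetical) example: on a goal that some existing Mathlib lemma closes directly, `exact?` searches the library and reports a suggestion such as `Try this: exact Nat.add_comm a b`, which one then pastes in to replace the tactic call.

```lean
import Mathlib.Tactic

-- `exact?` searches Mathlib for a lemma whose statement matches
-- the current goal and suggests it; here it would find the
-- commutativity lemma for natural number addition.
example (a b : ℕ) : a + b = b + a := by
  exact?  -- suggests: exact Nat.add_comm a b
```

`apply?` is similar but also proposes lemmas that match the goal only after leaving further subgoals to prove.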