I remember that some time ago, before I learned about type theory and constructive mathematics, a friend asked me why we don't just make a language that encodes programming using mathematics, i.e. encoding functions and numbers using set theory (functions being sets of tuples, numbers using the von Neumann encoding, etc.).
Back then my answer was that it wouldn't be feasible in terms of memory consumption and computation time (which, admittedly, is true if you take set theory literally and implement functions as hash sets or something similar), and that we instead have to use informal arguments to apply mathematics to our code, for example arguing that the properties of the natural numbers hold for a given integer type within a given range, and that an algorithm is therefore correct under some preconditions.
But this is not true, and it's sad that the mathematics taught at university (especially to computer scientists) is so far removed from computability. We just need to use a different mathematical foundation, replacing set theory and first-order logic with type theory (which serves as both the deductive system and a way to construct objects).
This benefits both mathematics (computable proofs; computer verification of formal proofs; the option to add the axiom of choice, the law of excluded middle, etc. as extra axioms that make a proof non-computable; making reliance on previous work explicit by including it as a library) and programming (the ability to verify algorithms directly using arbitrarily complex mathematical statements and reasoning, without employing an informal argument anywhere: more safety).
Philosophically, I also believe that computers are a measure of reality. Logic should have some connection to reality to be useful. Computers happen to be a real, physical thing, and by subjecting logic to the rules of computers, we subject logic to the rules of reality.
Practically, proving that something exists without being able to actually name it (which non-constructive logic often does) should be different from proving it by naming it, because "we have found this" is different from "we know that we should be able to find it". In the latter case, from a practical perspective, the work is far from done. There are real-world mathematical proofs with non-computable results where no way to compute the results has been found yet[1]. This is still a valuable result (in constructive logic it can be encoded as the double negation of a statement, which does NOT imply the statement itself: it can be read as "the thing is not a contradiction"), but it should be a different result from "here is the actual thing", and the logical framework should discriminate between these two statements.
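As a minimal sketch of that distinction in Coq (the name not_not_intro is mine, not from the standard library): any proof of P constructively yields a proof of ¬¬P, but the reverse direction is exactly what requires a classical axiom.

```coq
(* "The thing is not a contradiction": from a proof of P we can
   always build a proof of ~~P without any axioms. *)
Definition not_not_intro (P : Prop) (p : P) : ~ ~ P :=
  fun np => np p.

(* The converse, ~~P -> P, is not provable in plain Coq; it is
   exactly the kind of statement that needs a classical axiom
   (NNPP in Coq.Logic.Classical_Prop). *)
```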
Implications can encode all the other operators (¬a is a → 0, a ∨ b is (a → c) → (b → c) → c for any c, and a ∧ b is (a → b → c) → c for any c).
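These encodings can be written down directly in Coq, quantifying over the arbitrary c explicitly; this is just a sketch, and the primed names are made up for the example:

```coq
(* Negation as implication into False (the "0" above). *)
Definition neg (A : Prop) : Prop := A -> False.

(* Disjunction and conjunction built from implications alone,
   quantifying over an arbitrary proposition C. *)
Definition or' (A B : Prop) : Prop :=
  forall C : Prop, (A -> C) -> (B -> C) -> C.
Definition and' (A B : Prop) : Prop :=
  forall C : Prop, (A -> B -> C) -> C.

(* The usual introduction rules are plain lambda terms. *)
Definition or_introl' (A B : Prop) (a : A) : or' A B :=
  fun (C : Prop) (f : A -> C) (g : B -> C) => f a.
Definition and_intro' (A B : Prop) (a : A) (b : B) : and' A B :=
  fun (C : Prop) (f : A -> B -> C) => f a b.
```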
For a computer, implications are functions, and functions are atomic: they are just computation, and that's what computers do. So, by working like this, you make your proofs executable by a computer. This also means you can extract concrete witnesses from proofs of existence.
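For example, here is a tiny (made-up) existence proof in Coq, stated as a sigma type so that the witness can actually be computed out of it:

```coq
(* A constructive existence proof: some n with n + n = 4.
   The witness 2 is part of the proof term itself. *)
Definition half_of_four : {n : nat | n + n = 4} :=
  exist _ 2 eq_refl.

(* Extracting the witness is just taking the first component. *)
Compute proj1_sig half_of_four.  (* = 2 : nat *)
```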
It also gives you an easier syntax to work with, because you can now write your formal proofs using a (typed) lambda calculus (like Coq) as syntax instead of the abomination that sequent calculus is.
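For instance, a proof that implication is transitive is literally just function composition written as a lambda term (the name impl_trans is made up):

```coq
(* (a -> b) -> (b -> c) -> (a -> c), proved by composing
   the two functions. *)
Definition impl_trans (A B C : Prop) : (A -> B) -> (B -> C) -> A -> C :=
  fun f g a => g (f a).
```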
Classical logic, by contrast, has to encode an implication a → b as ¬a ∨ b.
Working with this "desugared" version actually requires the law of excluded middle, which is not constructive/computable. Proving the desugared version in Coq requires introducing an additional axiom (as seen in the attachment), which then prevents your proof from being executed as code.
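A sketch of what that looks like (the lemma names here are mine, not the attachment's): going from ¬a ∨ b to a → b is constructive, but recovering ¬a ∨ b from a → b only goes through with the classic axiom.

```coq
(* The constructive direction needs no axioms at all. *)
Lemma or_to_impl (A B : Prop) : ~ A \/ B -> (A -> B).
Proof.
  intros [na | b] a.
  - destruct (na a).
  - exact b.
Qed.

(* The other direction needs the law of excluded middle, pulled in
   as an axiom from the classical library. *)
Require Import Coq.Logic.Classical_Prop.

Lemma impl_to_or (A B : Prop) : (A -> B) -> ~ A \/ B.
Proof.
  intro f.
  destruct (classic A) as [a | na].
  - right. exact (f a).
  - left. exact na.
Qed.

(* Print Assumptions impl_to_or. now reports the classic axiom,
   which is what blocks running this proof as code. *)
```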
With recursive functions, you can even get runnable inductive proofs, as long as the recursive function is proven to terminate.
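As a small sketch (the name add_zero_r is made up to avoid clashing with the standard library), here is an inductive proof written directly as a structurally recursive function, which Coq accepts precisely because the recursion visibly terminates:

```coq
(* Induction on n is just structural recursion on n: the recursive
   call is on the smaller n', so Coq accepts the Fixpoint as
   terminating, and the proof is an ordinary runnable function. *)
Fixpoint add_zero_r (n : nat) : n + 0 = n :=
  match n as m return m + 0 = m with
  | 0 => eq_refl
  | S n' => f_equal S (add_zero_r n')
  end.
```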