Reduce the amount of unification diagnostics #49
## Problem
Consider the following program:
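A minimal Haskell-flavoured sketch of the kind of program in question (the names and definitions of `foo` and `bar` here are illustrative assumptions; the essential property is that every call constrains one shared inferred parameter type, call it `k`):

```haskell
module Example where

-- Sketch only: `foo` is lambda-bound, so its parameter type is a single
-- unification variable `k`; `bar` merely aliases `foo`, so every call
-- below constrains that same `k`.
example =
  \foo ->
    let bar = foo in
      ( bar True        -- inferred first: unifies `k` with `Bool`
      , bar (1 :: Int)  -- reports that `Int` and `Bool` do not match
      , foo (1 :: Int)  -- reports the same mismatch a second time
      )
```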
Right now, this program correctly produces two error diagnostics: one for `bar 1` and one for `foo 1`. Both report that `Int` and `Bool` do not match.

The two diagnostics are not both necessary. Had we first unified `k` with `Int` and only then inferred `bar True` and `bar 1`, we would have found that `bar True` was wrong, while `bar 1` correctly relates `k` with the `Int` type, just as `foo 1` does. Simply reordering some steps yields one diagnostic less.

More generally, the problem is how to unify constraints in such a way that the number of diagnostics produced is minimal.
## Solution: Tracking Unified Types
To arrive at a minimal number of diagnostics, one solution is to keep track of all types that a type variable has been unified with, instead of fixing it to the first one. If a variable is unified with both `Int` and `Bool`, an error is printed and both types are added to a set. When another constraint later requires this same variable to be `Bool`, no diagnostic is generated, because `Bool` is already in the set. If, however, a new type such as `String` is unified with the variable, `String` is added to the set and a diagnostic is generated once more.

We explicitly store the types in a set rather than a list because the order of the types must not matter: in the example above, unifying `k` first with `Int` or first with `Bool` should not affect the outcome of the inference.

This method ensures that only genuinely unexpected type combinations are reported, while types that are somewhat expected (because the source indicates they are used that way) stay silent. At the same time, enough type errors are reported that the user still has to fix all (hidden) typing errors in the program before they can continue.
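As a rough illustration of this bookkeeping (a minimal sketch with a toy type representation, not Bolt's actual solver), the unifier could keep, per metavariable, the set of concrete types it has been equated with, and only emit a diagnostic when a genuinely new conflicting type shows up:

```haskell
module UnifySketch where

import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import           Data.Set (Set)
import qualified Data.Set as Set

-- Toy type representation for illustration only.
data Ty = TInt | TBool | TString
  deriving (Eq, Ord, Show)

type MetaVar = String

-- For every metavariable, the set of concrete types it has unified with so far.
type Seen = Map MetaVar (Set Ty)

-- Unify a metavariable with a concrete type. A diagnostic is produced only
-- when the type is new for this variable *and* the variable already carries
-- at least one different type, i.e. the combination is genuinely unexpected.
unifyVar :: MetaVar -> Ty -> Seen -> (Seen, Maybe String)
unifyVar v ty seen = (Map.insert v (Set.insert ty old) seen, diag)
  where
    old = Map.findWithDefault Set.empty v seen
    diag
      | Set.member ty old = Nothing   -- seen before: consistent, or already reported
      | Set.null old      = Nothing   -- first binding: nothing to conflict with yet
      | otherwise         = Just $
          "types " ++ show (Set.toList old) ++ " and " ++ show ty ++ " do not match"

-- Example mirroring the issue: equating `k` with Bool, then Int, Int again,
-- and finally String yields exactly two diagnostics (one for Int, one for String).
demo :: [Maybe String]
demo = go Map.empty [TBool, TInt, TInt, TString]
  where
    go _    []       = []
    go seen (t : ts) = let (seen', d) = unifyVar "k" t seen in d : go seen' ts
```

Whether a diagnostic names the whole set or only the first conflicting type is a presentation choice; the important property is that repeated occurrences of a type already in the set stay silent.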