> friends don’t just bring up type inference in casual conversation
I wonder if this is a reference to "I need you to understand that people don't have conversations where they randomly recommend operating systems to one another"
But to the actual point of the article: my understanding is that there are areas where you can use bidirectional typing (e.g. languages that have subclasses) but where HM-style type inference might become undecidable.
I once studied proof theory for a summer at a school in Paris and we talked about type inference and theorem proving all the time in casual conversation, over beers, in the park. It was glorious.
Being a student is so much fun, and we often waste it, or at least don't value it as much as we ought. 20 years later I'd love to go back.
> Being a student is so much fun, and we often waste it, or at least don't value it as much as we ought. 20 years later I'd love to go back.
An aside, but some years ago I watched the demo 1995 by Kewlers and mfx[1][2] for the first time and had a visceral reaction precisely due to that, thinking back to my teen years tinkering on my dad's computer, trying to figure out 3D rendering and other effects inspired by demos like Second Reality[3] or Dope[4].
I seldom become emotional but that 1995 demo really brought me back. It was a struggle, but the hours of carefree work brought the joy of figuring things out and getting it to work.
These days it's seldom I can immerse myself for hours upon hours in some pet project. So I just look things up on the internet. It just doesn't feel the same...
[1]: https://www.youtube.com/watch?v=weGYilwd1YI
[2]: https://www.pouet.net/prod.php?which=25783
[3]: https://www.pouet.net/prod.php?which=63
[4]: https://www.pouet.net/prod.php?which=37
The subject does sometimes come up in my casual conversations, since Robin Milner was my first CS lecturer.
He never actually spoke about type inference in my presence. He did teach me CCS (pi-calculus predecessor) a couple of years later, by which time I could appreciate him.
> I wonder if this is a reference to "I need you to understand that people don't have conversations where they randomly recommend operating systems to one another"
It is!
> my understanding is that there are areas where you can use bidirectional typing (e.g. languages that have subclasses) but where HM-style type inference might become undecidable
There are! Afaik most languages end up with a bidirectional system in practice for this reason. Haskell started out HM and has shifted to bidir because it interacts better with impredicative types (and visible type applications). Bidir can handle fancy features like subtyping and all sorts of nifty stuff.
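If anyone's curious what "bidirectional" means concretely, here's a minimal sketch in Haskell (the textbook shape, not GHC's actual algorithm): infer synthesizes a type bottom-up, check pushes an expected type top-down, and an annotation is where you switch modes.

    import Control.Monad (guard)

    data Ty = TInt | TFun Ty Ty deriving (Eq, Show)

    data Expr
      = Var String
      | Lit Int
      | Lam String Expr   -- unannotated lambdas can only be checked
      | App Expr Expr
      | Ann Expr Ty       -- an annotation switches from checking to inference

    type Ctx = [(String, Ty)]

    -- synthesis mode: compute a type from the expression alone
    infer :: Ctx -> Expr -> Maybe Ty
    infer ctx (Var x)   = lookup x ctx
    infer _   (Lit _)   = Just TInt
    infer ctx (App f a) = do
      TFun dom cod <- infer ctx f  -- synthesize the function's type...
      check ctx a dom              -- ...then check the argument against it
      pure cod
    infer ctx (Ann e t) = check ctx e t >> pure t
    infer _   (Lam _ _) = Nothing  -- can't guess the domain unaided

    -- checking mode: verify an expression against an expected type
    check :: Ctx -> Expr -> Ty -> Maybe ()
    check ctx (Lam x body) (TFun dom cod) = check ((x, dom) : ctx) body cod
    check ctx e t = do
      t' <- infer ctx e            -- mode switch: fall back to synthesis
      guard (t == t')

Extending this with unification variables and subsumption is roughly where the production systems land.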
> What folks should actually be asking is “Does my language need generics?”.
You should also ask “Does my language need subtyping, such as subclasses?” And if the answer to both is yes, you should probably forget about Hindley-Milner, or at least pick something far away from it on the spectrum.
https://github.com/LPTK/simple-sub
https://www.reddit.com/r/ProgrammingLanguages/comments/hpi54...
https://dl.acm.org/doi/10.1145/3409006
https://en.wikipedia.org/wiki/Row_polymorphism
Structural subtyping yes, nominal subtyping is a bit pricklier.
As a developer I personally prefer structural subtyping, but structural subtyping is harder for a compiler to optimize for at runtime.
Nominal subtype hierarchies allow members to be laid out linearly, so a member access becomes just an offset, whereas a structural system always has the "diamond problem" to solve (it's hidden from users, so not a "problem" for them, but it will still haunt compiler/runtime developers).
Now the kicker, though: in practice nominal subtype polymorphism has other performance issues on _modern_ computers, since it creates variable-sized objects that cannot be packed linearly the way monomorphic structures can.
In the 90s, when languages settled on nominal typing, memory speed wasn't really a huge issue, but today we know that we should rather compose data to achieve data-polymorphic effects and keep individual types geared toward tight packing.
Thus, most performance benefits of a nominal type system over a structural one don't help much in real-life code, and maintenance-wise we would probably have been better off using structural types (iirc Go went there, and interfaces in Java/C# achieve mostly the same effect in practice).
I've been implementing row polymorphism in my fork of Elm in order to support proper sum and subtraction operations on unions, and it's far from trivial.
Example use case: an Effect may fail with 2 types, but you have actually handled/caught one, so you want to remove it from the union.
Elm-like HM systems handle row polymorphism fine, though as you say, mostly over records.
I'm not an expert in any of this; I only started studying it recently, so take my words with a grain of salt.
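Fwiw, the subtraction in your use case has a rough analogue in plain Haskell with nested Either (vanilla Haskell lacks row-polymorphic variants; the NotFound/Timeout names here are made up for illustration):

    data NotFound = NotFound
    data Timeout  = Timeout

    -- handling NotFound subtracts it from the error "union"
    handleNotFound :: Either (Either NotFound Timeout) Int
                   -> Either Timeout Int
    handleNotFound r = case r of
      Right x              -> Right x
      Left (Left NotFound) -> Right 0  -- recovered with a fallback value
      Left (Right t)       -> Left t   -- a Timeout can still happen

With real row-polymorphic variants the compiler tracks that subtraction for you, instead of you re-nesting Eithers by hand.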
Eh, generics kinda do introduce a subtyping relation already. It's just that HM's Gen rule (e : σ implies e : ∀α.σ, provided α doesn't occur free in the typing environment) is restrictive enough that this subtyping relation can be checked (and inferred) using just unification, which is quite an amazing result.
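To make "just unification" concrete, here's a minimal sketch of first-order unification with an occurs check, the decidable engine HM leans on (illustrative names, naive substitution composition):

    data Type = TVar String | TCon String | TArr Type Type
      deriving (Eq, Show)

    type Subst = [(String, Type)]

    applySubst :: Subst -> Type -> Type
    applySubst s t@(TVar a) = maybe t (applySubst s) (lookup a s)
    applySubst s (TArr l r) = TArr (applySubst s l) (applySubst s r)
    applySubst _ t          = t

    -- solve t1 ~ t2, producing a substitution or failing
    unify :: Type -> Type -> Maybe Subst
    unify (TCon a) (TCon b) | a == b = Just []
    unify (TVar a) t                 = bindVar a t
    unify t (TVar a)                 = bindVar a t
    unify (TArr l1 r1) (TArr l2 r2)  = do
      s1 <- unify l1 l2
      s2 <- unify (applySubst s1 r1) (applySubst s1 r2)
      pure (s2 ++ s1)
    unify _ _                        = Nothing

    -- occurs check: a ~ (a -> b) has no finite solution, so reject it
    bindVar :: String -> Type -> Maybe Subst
    bindVar a t
      | t == TVar a = Just []
      | occurs a t  = Nothing
      | otherwise   = Just [(a, t)]
      where
        occurs v (TVar b)   = v == b
        occurs v (TArr l r) = occurs v l || occurs v r
        occurs _ _          = False

Gen then quantifies over whatever variables are left unconstrained. Once you add real subtyping, these equations become inequalities, and the neat story starts to strain.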
I have my own programming language (pretty advanced), but I don't even know what these two typing approaches are. Is that problematic? Or do I just have one of the two without knowing it?
..because that is more practical.
I don't think it's more practical; being able to do things like type inference on return values is actually really cool. Maybe it's more practical for the programming language developer (less learning about type systems) than for the user... but then you have to ask why build another language?
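A toy Haskell example of the kind of thing I mean, with nothing annotated and the full signatures, return types included, inferred:

    -- HM infers the whole signature, return type included
    pairUp x y = (x, y)  -- inferred: pairUp :: a -> b -> (a, b)
    twice f x = f (f x)  -- inferred: twice :: (a -> a) -> a -> a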