It's a massive crime that, decades into FP, we still don't have a type system that can infer or constrain the number of copies and allocations a piece of code has. Software would be massively better if we did: unnecessary copies and space leaks are some of the most performance-regressing bugs out there, and there simply isn't a natural way of unearthing them.
There are ongoing projects like Granule[1] that are exploring how more precise resource usage can be captured in types, in this case by way of graded effects. There is of course still a tension in exposing too much of the implementation details via intensional types. But it's definitely an ongoing avenue of research.

[1] http://granule-project.github.io/granule.html
We do now, though, with OxCaml! The local stack allocation mode puts quite a strong constraint on the shape of the allocations that are possible.
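As a minimal sketch of what the mode buys you (the function names here are just illustrative): a local_ parameter may be stack-allocated by the caller, and the compiler rejects any code in the callee that would let the value escape its region.

let sum_pair (local_ p : int * int) : int =
  let (a, b) = p in
  a + b

(* Rejected: [Some p] would let the stack-allocated pair outlive its region. *)
(* let escape (local_ p : int * int) = Some p *)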
On my TODO list next is to hook up the various O(x)Caml memory profiling tools: we have statmemprof, which does statistical sampling, then the runtime events buffer, and (hopefully) stack activity from the compiler in OxCaml's case.
This provides a pretty good automation loop for a performance-optimising coding agent: it can choose between heap vs local, copy vs reference, or fixed layout (for SIMD) vs fragmentation (for multicore NUMA) depending on the task at hand.

Some references:

- Statmemprof in OCaml: https://tarides.com/blog/2025-03-06-feature-parity-series-st...

- "The saga of multicore OCaml" by Ron Minsky, on how Jane Street has viewed performance optimisation from the launch of OCaml 5.0 to where they are today with OxCaml: https://www.youtube.com/watch?v=XGGSPpk1IB0
> infer or constrain the number of copies and allocations a piece of code has
That's exactly what substructural logic/type systems allow you to do. Affine and linear types are two examples of substructural type systems, but you can go further and also limit moves, exchanges/swaps, etc., which helps model scenarios where allocation and deallocation must be made explicit.
I recently realized that "pure functional" has two meanings: one is no side effects (functional programmers, especially of languages like Haskell, use it this way), and the other is that the language has no imperative fragment (the jump from ISWIM to SASL dropped the non-functional parts inherited from ALGOL 60). A question seems to be whether you want to view sequencing as syntax sugar for lambda expressions or not.
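For instance, a minimal OCaml illustration of that reading (nothing here beyond the stdlib): under call-by-value, sequencing a unit expression before another behaves like applying a lambda to it, so both functions below print "a" then "b".

let imperative () =
  print_string "a";
  print_string "b"

let desugared () =
  (fun () -> print_string "b") (print_string "a")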
While Python isn't type-safe, you can use Pylance or similar in combination with type hinting to get your editor to yell at you if you do something bad type-wise. I've had it turned on for a while in a large web project and it's been very helpful, and almost feels type-safe again.
It just isn't good enough. Any time Pyright gives up on type checking, which is often, it simply decays the type into one involving Any/"Unknown":

Without strict settings, it will let you use this value as if it had any other type, and introduce a bug.

But with strict settings, it will prevent you from recovering the actual type dynamically with type guards, because it flags the existence of the untyped expression itself, even when it is used in a sound way, which defeats the point of using a gradual checker.

Gradual type systems can and should keep the typed fragment sound, not just give up or (figuratively) panic.
> I've had it turned on for a while in a large web project and it's been very helpful, and almost feels type-safe again
In my experience "almost" is doing a lot of heavy lifting here. Typing in Python certainly helps, but you can never quite trust it (or that the checker detects things correctly). And you can't trust that another developer didn't just write `dict` instead of `dict[int, str]` somewhere, which defaults to Any for both key and value. That will still type check (at least with mypy), and now you've lost safety.
Using a statically typed language like C++ is way better, and moving to a language with an advanced type system like that of Rust is yet another massive improvement.
Yeah, if you're going to use static type checks, which you should, you really want to run the checker in strict mode to catch oversights such as generic container types left without their type arguments.
Although I've found that much of the pain of static type checks in Python is really that a lot of popular modules expose incorrect type hints that need to be worked around, which really isn't a pleasant way to spend one's finite time on Earth.
Yes - high-performance Haskell code looks similar. There isn't much to be said there - it's a little less clean-looking, because FP optimizes for the most useful scenarios and trying to do highly advanced stuff like this will be more verbose. This is in contrast to OOP, where everything is verbose, yet high-perf code that falls into the shape of globals + mutation + goto sometimes looks very succinct.
I think there are more succinct snippets in here, and some of this more verbose exposition is for pedagogical purposes. I am not a fan of OCaml because tacking on the object syntax made it more verbose than SML (ugly imo). Looks like OxCaml continued the trend.
(author here) The mutation is only for performance-critical code. I'm first trying to match C/Rust performance in my code, and then transform it into more idiomatic functional code (which flambda2 in OxCaml can optimise).

It's too difficult right now to jump directly to the functional version, since I don't understand the flambda2 compiler well enough to predict what optimisations will work! OxCaml is stabilising more this year, so that should get easier in time.
Looks pretty ML-ish to me, even in a segment like this:
let parse_int64 (local_ buf) (sp : span) : int64# =
  let mutable acc : int64# = #0L in
  let mutable i = 0 in
  let mutable valid = true in
  while valid && i < I16.to_int sp.#len do
    let c = Bytes.get buf (I16.to_int sp.#off + i) in
    match c with
    | '0' .. '9' ->
      acc <- I64.add (I64.mul acc #10L) (I64.of_int (Char.code c - 48));
      i <- i + 1
    | _ -> valid <- false
  done;
  acc
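For comparison, here is a rough sketch (mine, not the author's) of the more idiomatic functional direction mentioned above, in plain OCaml: boxed int64, an ordinary record in place of the unboxed span, none of the local_/unboxed annotations, and the loop expressed as tail recursion.

(* Hypothetical plain-OCaml counterpart of the snippet above. *)
type span = { off : int; len : int }

let parse_int64 (buf : bytes) (sp : span) : int64 =
  (* Accumulate digits left to right; stop at the first non-digit,
     matching the imperative version's behaviour. *)
  let rec go i acc =
    if i >= sp.len then acc
    else
      match Bytes.get buf (sp.off + i) with
      | '0' .. '9' as c ->
        let digit = Int64.of_int (Char.code c - 48) in
        go (i + 1) (Int64.add (Int64.mul acc 10L) digit)
      | _ -> acc
  in
  go 0 0L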
> I am also deeply sick and tired of maintaining large Python scripts recently, and crave the modularity and type safety of OCaml.
I can totally relate. Switching from Python to a purely functional language can feel like a rebirth.
When I learnt FP, the choice was between Lisp, Scheme, Miranda, Caml Light and Standard ML, depending on the assignment.
Nowadays some folks consider FP === Haskell.