1. Go: when I saw that code I wrote almost a decade ago still compiles and runs, I decided to use Go for everything. There were some rough edges when I started using it a decade ago, but now it's painless.
2. Haskell: I use it for DSLs and state machines.
3. Bash: for all the deployment scripts.
4. TypeScript: for the frontend.
Lately, I’ve been using Go and SQLite for nearly everything.
I don't have any motivation to look at any other language.
I gave up on Java, Python, Ruby, Rust, C++, and C# long ago.
Fun fact: same goes for the cloud. I just don't use managed cloud services anymore; I only use VMs or dedicated servers. I've found that when you want to run a service for decades, you have to run it yourself if you don't want it to cost a lot in the long run.
I manage a few MongoDB and PostgreSQL clusters. Most of the apps, like our email marketing tool (sending thousands of emails each day), are simple Go apps + SQLite using less than 512 MB of RAM.
Same for SaaS billing: the solution is written entirely in Go and uses Postgres. (I didn't feel safe using SQLite for a multi-tenant setup.)
Our chat/ticketing system is SQLite + Go. Deployment is easy: just upload the cross-compiled Go binary plus a systemd service file; Alloy picks up the logs and ships them to Grafana, which has all the alerts.
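For a concrete picture of that deployment flow, a minimal systemd unit for a cross-compiled Go binary might look like this (the unit name, paths, and user are hypothetical, not taken from the comment):

```ini
# Hypothetical unit file: /etc/systemd/system/chat-server.service
# Paths, names, and user are illustrative only.
[Unit]
Description=Chat/ticketing service (Go + SQLite)
After=network.target

[Service]
ExecStart=/opt/chat/chat-server
WorkingDirectory=/opt/chat
Restart=on-failure
User=chat

[Install]
WantedBy=multi-user.target
```

With something like this in place, a deploy really is just an scp of the new binary and a `systemctl restart chat-server`.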
I don't need to worry about "speed" for anything I do in Go, unlike Ruby/Python.
When something has to be correct, I model it in Haskell, as its rich type system helps you write correct code. The setup isn't as painless as Go's, but performance is decent.
I write good documentation and deployment instructions right into the monorepo. For a small team this is more than enough, imho.
No Docker, no Kubernetes: just simple scripts plus Grafana, Prometheus, Loki, and Alloy/node_exporter. Life couldn't be any simpler than this.
I'm in the same boat. I started using Go only a year ago, but I don't really want to use anything else now for apps or data processing. I wrote an app that loaded a lot of reporting data into DuckDB. I've been doing so much Java and JavaScript that Go just felt much simpler to deal with overall.
Shell for the scripts. I haven't worked much with DSLs, as I'm really not a fan of them. Maybe I'll give Haskell another shot to see if it sticks.
Java is a resource hog when you use patterns and libraries popular in Java land.
When you work in the Java ecosystem, you just assume the app needs that many resources! But when you write the same thing in Go using the same approach, you'll find resource usage is really very low.
We have a 1:1 copy of the app: on the JVM with Spring Boot it uses 2 GB of RAM; in Go it runs in 512 MB and is blazingly fast.
Of course it's possible to tune a Java app, but why bother, when we get the same low resource usage and better performance in Go from the get-go while still writing naive, straightforward code?
Deployment is super simple in Go: upload a single cross-compiled binary and it's done.
Rust needs a lot more effort to write correct code than Go in my experience. We get the same performance out of Go, with much less effort. At some point, it's just cheaper to start one extra instance than perform some low-level optimisation; modern hardware is fast enough that Rust-level optimisation is rarely needed for what we do.
I've recently been trying to port a simple program of mine to Mojo to find out how the language looks and feels. The comptime feature (inspired by Zig, I think) is an absolute joy to use. It also helps a lot that the syntax looks like Python. I'm excited to see how the language develops, particularly its memory-safety story.
Io is not a monad. There's nothing stopping you from stashing a global Io "object" and just passing that global wherever you interface with the stdlib.
It's dependency injection. And yes, you can model dependencies like a monad, but most people, even in less pure FP languages, don't.
I don't say this just to be a pedant, but if you're an FP enjoyer, you will be disappointed if you come away thinking Zig is FP-like, outside of a few squint-and-it-looks-like-it things.
My reading of the article was that the author seems to be in search of a new paradigm, one that moves beyond what he sees as the limitations of "fp-like" languages as they exist today. His point appears to be that Zig provides the benefits of those languages while avoiding at least some of the downsides.
And he does admit you may have to squint to appreciate the FP capabilities Zig provides.
No? I don't agree. The domain can be strongly modelled in the types: for instance, declaring kilometers, seconds, etc. instead of using primitive floats/reals everywhere, to statically prevent dimensional-analysis bugs.
> Well, I’ve been radicalized. I’ve learned enough performance-oriented programming to be dissatisfied with the common functional languages (Haskell, OCaml, Common Lisp/Clojure, Scheme) because each of these languages are predicated on the existence of garbage collection and heaps.
I would take another look at Common Lisp if I were the author. Manual memory management is very much an option where you need it.
I don't think it's generally a good idea to be making complex type generators like this in Zig. Just write the type out.
The annoyingness of the thing you tried to do in Zig is a feature: it's a "don't do this, you will confuse the reader" signal. As for optional, it's a pattern so common that it's worth having built-in optimizations for it, for example @sizeOf(*T) == @sizeOf(usize) but @sizeOf(?*T) != @sizeOf(?usize). If optional were a general sum type, you wouldn't be able to make these optimizations easily without extra information.
The point is that algebraic data types are common in functional languages. "Maybe" is just one example of an algebraic data type; there are tons more.
If the article says "functional programmers should take a look at Zig", and Zig makes algebraic data types hard, then maybe they shouldn't use it.
And if you then say "the annoyingness is a feature, use Zig the way it is intended to be used", that's another signal to functional programmers that they won't be able to use Zig the way they use functional languages.
> if optional were a general sum type you wouldn't be able to make these optimizations easily without extra information
Rust has these optimizations (called "niche optimizations") for all sum types. If a type has any unused or invalid bit patterns, then those can be used for enum discriminants, e.g.:
- References cannot be null, so the zero value is a niche
- References must be aligned properly for the target type, so a reference to a type with alignment 4 has a niche in the bottom 2 bits
- bool only uses two values of the 256 in a byte, so the other 254 form a niche
There are limitations, though: you must still be able to create and pass around pointers to values contained within an enum, so the representation of a type cannot change just because it's placed within an enum. For example, the following enum is one byte in size:
enum Foo {
    A(bool),
    B
}
Variant A uses the valid bool values 0 and 1, whereas variant B uses some other bit pattern (maybe 2).
But this enum must be two bytes in size:
enum Foo {
    A(bool),
    B(bool)
}
...because bool always has bit patterns 0 and 1, so it's not possible for an invalid value for A's fields to hold a valid value for B's fields.
Came to say this. Early in my career I really thought implementing Maybe in every language was necessary, but now I know better. Use the idioms, and don't try to make every language something it's not.
This looks like an example of a low-level language vs a high-level language (relatively speaking). The low-level language makes much more of what is going on underneath explicit, while the higher-level language abstracts that away behind a common pattern. Presumably that explicitness allows for more control and/or flexibility. So apples to oranges?
Low-level doesn’t mean more information, it means more explicit.
In Zig, that means being able to use the language itself to express type-level computations, instead of Rust's angle brackets, trait constraints, and derive syntax, or C++ templates.
Sure, it won’t beat a language with sugar for the exact thing you’re doing, but the whole point is that you’re a layer below the sugar and can do more.
Option<T> is trivial. But Tuple<N>? Parameterizing a struct by layout, AoS vs SoA? Compile time state machines? Parser generators? Serialization? These are likely where Zig would shine compared to the others.
> I see http_client as existing in a Reader monad that contains an allocator and an IO interface. This is exactly how the IO monad (and for that matter IO#) works in Haskell. The fact that the Zig people came up with this independently speaks not just to the universal nature of monads (and the algebraic structures of programming languages)
Honestly, this sounds like monad bullshit. That's a struct/class/ADT/whatever you want to call it; they've existed forever. The only new idea Zig had was that maybe we shouldn't make them global instances.
Anyone preferring functional programming will be extremely disappointed with Zig. And I'm saying this as a big user of Zig. It's a language for imperative code. And Io is not a monad, just a bunch of virtual methods doing the actual I/O.
It’s possible (even true in my opinion) that garbage collected functional languages and low level languages like Zig are both great, and serve different purposes.
I actually ship stuff in Haskell believe it or not. I also think Zig is very cool and have played around with it quite a bit. Yes, garbage collection hurts performance, but the reality is that the overwhelming majority of all software does not suffer from the performance loss between well written code in a reasonably performant functional gc language and a highly performant language with manual memory management. It’s just not important. But not having to deal with the cognitive overhead of managing memory and being able to deal in domain specific abstractions only is a massive win for developer productivity and code base simplicity and correctness.
I think OxCaml's approach of opting in to more direct control of performance is interesting. I also think it's great that many functional patterns are making their way into imperative-first languages. Language selection is always about trade-offs for your specific use case. My team writes Haskell instead of Rust because Haskell is plenty fast for our use case and we don't have to write lifetime annotations everywhere and think about borrowing. If we needed more performance, we would have no choice but to explore other languages and sacrifice some developer experience and productivity; that's very reasonable. I'm also not saying performance doesn't matter (if you're writing for loops in Python, stop). But this read to me like "because better performance exists with manual memory management, all garbage collectors are bad, so I'll force Zig to be something it's not in order to gain performance I probably don't need", which to me is an odd take. A more measured way of thinking about this might be: it can be useful to leverage functional patterns where appropriate in low-level languages, if you find yourself needing to write code in one.
I think the lisp situation is peculiar, for 3 main reasons:
- most of them are dynamically typed (and thus don't need sum types, as there are no static types). The ones that do have gradual type systems likely implement some form of them (off the top of my head I can only remember Typed Racket, and I think it implements them through union types)
- not all Lisps lean functional: I believe that's mostly a prerogative of Scheme and Clojure (and their descendants); something like CL is a lot more procedural, iirc
- in most Lisps, thanks to macros, you probably don't need the language to support some sort of match construct out of the box: just implement it as a macro [1]
In general, the "proper sum types" side of functional programming is just the statically typed one, but even in dynamically typed FP languages you end up adopting sum-type-esque patterns, like Elixir's error handling (which closely resembles the usual Either/Result type, just built out of tuples and atoms rather than a predefined type), and I assume many Lisps adopt similar patterns as well.
Isn't the whole point of abstraction to not care about whats underneath unless you really have to? But ideally, you don't because the abstraction is "good enough"?
I haven't heard anyone writing code in Elixir complain about performance issues.
I believe EqPoint lets you pass around a bag of functions (i.e., an interface, which Zig does not have as a language concept) to functions that can then be written in terms of "I need these functions" rather than in terms of a concrete type.
Here: I am not the only one who refers to it as dependency injection:
https://daily.dev/blog/zig-async-io-io-uring-zig-0-16-rethin...
"Zig 0.16 introduces std.Io, a flexible I/O abstraction that uses dependency injection, similar to the Allocator interface"
> ...
> What facilities does the language provide me to create correct-by-construction systems and how easily can I program the type-system.
Isn't programming the type-system orthogonal to the program's domain in the same way that manual memory management is?
In addition to the normal value-to-value, type-to-type, and type-to-value functions, comptime lets you write static value-to-type functions.
With full dependent types, you can additionally write dynamic value-to-type functions, completing the value-to-type corner.
So in terms of typing strength: plain Haskell < Zig < dependently typed languages.
Can you elaborate? There are only, what, 11 datatypes in Elixir?
[a: 1, b: 2] == [{:a, 1}, {:b, 2}]
Or maybe atom vs string keys in maps?
%{a: 1} vs %{"b" => 1}
Or keyword lists always needing to come last in lists?
[some: :value, :another] # error
[:another, some: :value] # valid
Or maybe something else entirely. Those are just things I remember having to look up repeatedly when I was first learning Elixir.
In which case, what's the term for the "proper sum types and pattern matching" flavour of things?
[1] https://github.com/clojure/core.match
btw we do sometimes bitch about performance :)
Why write:
EqPoint.eql(a, c)
When you can write:
Point.eql(a, c)