The first ARM I spent quality time with was a StrongARM SA-1110 dev board. Interesting stuff, but very mobile-focused, and the toolchain left much to be desired. People forget that ARM had pretty stiff competition from MIPS and, to a slightly lesser extent, SuperH at the time. There was a lot of innovation, and it was a fun time to be mucking around. The SA-1110 ended up not being a fit for what we wanted to do, and in the end I did other things, but it was worthwhile experience.
Yes, provided it takes off like ARM does currently. On one hand the work being done on ARM right now is helping displace the x86-64 monoculture, on the other hand it might leave no niche for RISC-V to fill.
True, but this is something that could be relatively easy for Arm Ltd to address in an endless number of possible ways, considering their already-dominant position, the perceived seriousness of the threat, etc. Just like Microsoft basically gave up on making the average Joe pay directly for a Windows license.
Another question is at what point you expect RoI on your RISC-V expertise. Even if it were basically guaranteed to take 20% of the global CPU market 10 years from now, you would still need to ask yourself what you want to be doing until then.
I'm barely old enough to remember when VLIW was still being hyped, and while RISC-V so far has a much better outlook commercially, it would be wise to remember that this market is ruthlessly competitive. Even companies as old and dominant as Intel must remain vigilant, for the old friend next door may eat their lunch and their dog.
The last time I looked, I tried to deploy a Python MVP on ARM but found that some of the dependencies weren't compatible. I'd take a hard look at your ecosystem and what sort of dependencies you're likely to need in the future before committing to ARM.
This was my observation as well. Python and Node still don't have perfect support on ARM (and no, this was not 10 years ago; a couple of months ago, tops). Not the core language/runtime, but libraries: they still rely on native libs for some things. The one platform where I haven't experienced any issues at all is the JVM. It worked perfectly every time, at least for my use cases (web API services).
Crypto and media codecs are typical sources of those incompatibilities. We occasionally run into Intel vs. AMD issues, and even subtle generational mishaps every now and then.
In the cloud this is often related to the hypervisor hiding architecture details (or plainly lying about them). Another source of trouble is the Python ecosystem fighting the OS packages.
My recommendation would definitely include inspecting your dependencies. Also, be aware of Python's limitations: it's easy to develop with, but a bit of a nightmare to deploy and maintain when targeting distinct platforms.
I use Rust across 6+ different target architectures: in webserver applications, in soft CPUs of my own design, and constantly in no_std. I have the luxury of only using Rust in much of my work.
With that being said, Go is easier and more consistent for cross-compilation than Rust for web server applications. This is a result of the stdlib being so expansive and also 100% Go, down to the DNS resolver. Every non-cgo application will behave similarly. This is not true for Rust.
I disagree on the choice of Go and die a little inside whenever it is suggested as the prime language for back-end workloads.
To quote Rob Pike himself:
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."
The better languages for this on ARM64 hosts are C# and Kotlin. One has a lot of investment in high-performance primitives (including the full set of ARM64 SIMD intrinsics); the other has the power of the JVM ecosystem. Both have really nice frameworks you don't need to massage to reach high throughput: ASP.NET Core and Vert.x/ActiveJ.
The language not being semantically "good" does not change the webserver cross-compilation story being easier and more consistent than every other platform. I wouldn't voluntarily choose to use Go, but this is absolutely one of its "killer features."
C# and JVM languages both require a substantial amount of tooling and runtime support to achieve this "cross-platform" status, and it's still really not the same thing as being able to take a 100% Go source repository and release native Windows, macOS, and Linux binaries for both ARM64 and AMD64 machines, from any source machine, with no additional tooling and just a couple of CLI flags.
With Go, I don't have to install a runtime on the target machine nor do I have to hackily bundle one into my binary. I don't have to make sure the runtime version installed on my system is compatible with whatever I am doing. I don't have to paper over the lack of this functionality with Docker.
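As a concrete sketch of that workflow (module and output names here are made up; disabling CGO keeps you on the pure-Go stdlib path described above):

```shell
# Cross-compile the same pure-Go module for several OS/arch pairs,
# from any host machine, with no extra toolchains installed.
CGO_ENABLED=0 GOOS=linux   GOARCH=arm64 go build -o dist/app-linux-arm64 .
CGO_ENABLED=0 GOOS=linux   GOARCH=amd64 go build -o dist/app-linux-amd64 .
CGO_ENABLED=0 GOOS=darwin  GOARCH=arm64 go build -o dist/app-darwin-arm64 .
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -o dist/app-windows-amd64.exe .
```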
.NET and the JVM have both made portability a headline feature since their very first versions. The described difference does not exist in reality, at least not with .NET.
JVM tooling produces by-definition portable artifacts that require a host runtime (easily obtained by just picking the right container image, which is also the most popular way to deploy Go workloads).
.NET tooling is identical in producing portable assemblies. Alternatively, you can compile, like Rust, to a single executable file for a target architecture, which either bundles the IL assemblies and a JIT compiler or is a native AOT binary. GraalVM native image offers similar (but somewhat more limited) functionality for the Java world. This is identical to what a final Rust binary ends up looking like once you include e.g. Tokio and Hyper/Reqwest, or to a Go one, which, by all means, includes a full runtime.
It is also just two flags: dotnet publish -p:PublishAot=true -r linux-arm64.
If anyone has a one-liner for the GraalVM counterpart, please post it.
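Hedging a little, since native-image options have shifted between releases, the rough GraalVM counterpart (given a GraalVM JDK with the native-image tool installed, and a runnable app.jar; both names are placeholders) is:

```shell
# Ahead-of-time compile a runnable jar into a standalone native executable.
# Note: unlike Go, native-image compiles for the host architecture only.
native-image -jar app.jar -o app
```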
Fundamentally, "requires a host runtime" is not the same thing. There's zero reality where the combination of jars, classpath, system JVM version, and system JDK version is the same in developer or end-user experience.
C# tooling that statically bundles a massive runtime or JIT compiler where most of the host functionality lives is also fundamentally different in internal implementation than what Go or Rust gives you.
GraalVM native image is a true native binary and the closest thing mentioned here, but the tooling still sucks compared to Rust, Go, or .NET.
The "massive runtime" is on average a 20-60MB executable binary (JIT) if you apply trimming (a single flag or build property). It is no different from Go, which also includes an async runtime with its own threadpool, type-system facilities for reflection, and a GC.
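For reference, the trimming knob mentioned above is a publish-time property; a sketch (target runtime assumed, sizes vary by project):

```shell
# Self-contained, single-file publish with IL trimming enabled (JIT-based):
dotnet publish -c Release -r linux-arm64 --self-contained \
  -p:PublishSingleFile=true -p:PublishTrimmed=true
```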
If you do an AOT publish instead (which is about 9MB for a full web server plus threadpool and more), it will actually outperform Go at trimming unreferenced code once you start adding dependencies (hello, 200MB binary sizes or worse in the Go world; it is almost as if there is no free lunch in programming).
Do not conveniently ignore these facts: NativeAOT produces true native binaries. If you look at them in Ghidra (with symbols not stripped), they resemble something like C++ with a garbage collector (write-barrier calls) and a few interface-dispatch calls.
(I still find it shortsighted and ironic, because coming to Go from worse ecosystems naturally feels like a breath of fresh air, yet it results in developers being stuck in a local maximum, ignoring far better options. And it also ends up being a discussion about the publishing model rather than language merits, of which Go has few.)
unrelated to your original argument and so i’ll treat it as diversionary and ignore. that said, it’s an important signal that you suffer a deficiency in clear, systematic thinking. i’m happy to re-engage when you rid your arguments of these ‘tin whiskers’ that are equally as damaging as actual tin whiskers.
At least for us, we made the switch in preparation for the release of the M1 Mac machines.
It took about a year for things to simplify - originally we had to use qemu for dependencies and had to hack and compile a few other bits along the way. We have more gnarly deps than your average web app though.
Now the only thing we’re running on x86 is Sentry and I’ll stop self hosting that when I get a spare moment because it’s more of a beast to run than I like.
More and more, yes. Anything that you have the source of, or that is compiled for it, works. The limitations are mostly in numerical computation, which may lack some instructions or not be optimised as well as on x86. For standard web/micro services they are great.