Java might be the most successful programming language. It is a solid choice across many different fields. It is used in huge infrastructure projects (the Apache Foundation), in government, and at big tech companies such as Amazon, IBM, Google, and Apple for many large-scale services. It can do web, ML, and GUIs, and it's still strong in academia. On top of that it offers a great programming experience with excellent IDE support, and its ecosystem is huge. Even if you don't like the language itself, you can choose between Kotlin, Scala, and Clojure, which all run on the JVM.
I think Java is seriously downplayed by the HN crowd.
Ironically, while Java was the original "write once, run anywhere" language, it never fully succeeded in that regard (browser applets, for example, were never popular). I believe JavaScript has.
I was pretty much exclusively a Java programmer for the first decade and a half of my career, before moving to Node and TypeScript. I don't think I could ever go back at this point. Most importantly, this is the first time the entire code base (front end and back end) has been in the same language and toolchain, and I think it is the single most important thing I've seen in years for improved team productivity. The ease with which engineers can move between front end and back end is an incredible boon that shouldn't be underestimated.
I use TypeScript a lot. It's way better than JS, but the type erasure problem is far worse than in Java. Essentially all types are erased, so bugs where typings don't match what you actually receive and everything blows up are common. This isn't possible in Java, where types are checked at runtime.
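A quick sketch of what the parent means by Java checking types at runtime: a bad cast fails immediately with a ClassCastException instead of letting the wrong shape of data flow onward, as an erased TypeScript type can. (The `fromTheWire` variable here is just a stand-in for data from deserialization.)

```java
// Minimal sketch: Java verifies casts at runtime, so mismatched data
// fails fast at the boundary rather than propagating silently.
public class RuntimeTypeCheck {
    public static void main(String[] args) {
        Object fromTheWire = "42"; // imagine this arrived via deserialization
        try {
            Integer n = (Integer) fromTheWire; // cast is checked at runtime
            System.out.println(n + 1);
        } catch (ClassCastException e) {
            System.out.println("caught: wrong type at the boundary");
        }
    }
}
```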
JS also uses several times more memory, is slower, and has a terrible (practically nonexistent) threading model. Yes, you can run multiple instances of Node or whatever, but sharing objects between them requires message passing, which is orders of magnitude slower.
Until JS has a good threading model, I'm never using it for a backend. It's too expensive to use a bunch of single-core machines to make up for it.
All of our devs use TypeScript and Java daily for front end and back end; the only overhead is making sure the objects we're passing around match on both ends. The only advantage to using the same language for everything is hiring inexperienced devs who don't know both, IMO.
> bugs where typings don't match what you expect and everything blows up are common
> the only overhead is making sure objects were passing around match on both ends
Seems like it is a big deal, based on the first sentence.
I've never been convinced by the single-language argument. Sharing code between frontend and backend sounds good, but in practice there's little overlap: models have subtle differences, there's extra logic server-side... All in all, it's not very practical.
To me the most awesome part of a fully TS project is that you can use the same interfaces everywhere. If you take the time to define them for any input / output, everything is pretty much guaranteed to be sound.
> the type erasure problem is far worse than Java
Check out Runtypes [1]; it's amazing for guarding any incoming data.
This library looks great! For cross-language comms you can also use gRPC to avoid writing objects for both sides. We're not using it company-wide, but so far it works as advertised, minus the annoyance of having to use a gRPC proxy for web endpoints.
> has a terrible (non existing) threading model. Yes you can run multiple instance of node or whatever, but sharing objects between them requires message passing
> Ironically, while Java was the original "write once, run anywhere" language, it never succeeded in that regard (e.g. browser applets were never popular). Ironically, I believe Javascript has.
It completely succeeded in that!
Java (well, JVM) developers today can:
- Write code on any of Windows/Linux/macOS
- Deploy that code on any of Windows/Linux/macOS
Not a lot of languages/platforms can claim this amount of success, let alone with such an amazing set of tools and ecosystem.
C# (largely inspired by Java) runs on even more platforms, because there are compilers/runtimes for mobile platforms. It's also the second biggest "enterprise" language, and it fixes a lot of Java's pain points.
Actually, C# runs on fewer platforms than Java, because it doesn't run on embedded devices, M2M, Blu-ray players, military deployments, mainframes, Xerox and Ricoh copiers, SIM and chip-based credit cards, and plenty of other devices; there's even a mobile OS based on a Java dialect.
I'm not saying Java code isn't very portable, and there are some notable examples (the IntelliJ/JetBrains products are always the first to come to mind) of successful cross-platform Java client apps.
But much of the original late-90s hype about Java was its cross-platform nature, especially in the browser, and Java applets and other in-browser Java technologies were never popular (consider that GMail and other GWT apps were written in Java but compiled down to JavaScript).
Java owes 90%+ of its ubiquity and longevity to its success on the server.
They were pretty popular, actually. The problem was that browser makers started trying to kill applets off very early. Netscape supported them, but Microsoft and Sun had a huge falling out, and Microsoft ended up pushing ActiveX very hard as a replacement. The dominance of IE ensured that ActiveX replaced Java applets for a while, and then MS fell out of love with ActiveX too, so HTML+JS was all that was left.
Applets were very much a victim of various power struggles within the browser industry, combined with Sun's general lack of competence on the desktop - for instance, their online upgrade engines have always sucked. Though in fairness, nobody got that right until Chrome.
I wonder if it was because, at the time, the different DOM APIs in each browser were so immature that unifying them into a single API was too big a task. The applet plus a black-box region for rendering was the easiest MVP. Of course, with hindsight, that turned out to be a piece of crap.
> Java was the original "write once, run anywhere" language
That crown properly belongs to the UCSD P-System, which was the Java of the 1980's. It was the same idea as Java - compilation to a bytecode which an interpreter ran. It failed because the interpreter performance penalty was too high.
Java also started out as an interpreter, which made it too slow. Steve Russell of Symantec invented a JIT for it, and like the lumbering Allison-engined P-51 getting a supercharged Merlin, it brought Java to life.
I also wonder how much the difficulty of sharing files between different systems due to different disk formats played a role in its failure.
You could run p-system on a lot of machines - Apple II, IBM PC, TI-99/4A, PDP11... but how would you (and why would you) distribute your code across machines with such different storage media?
Maybe it was more of a problem of not being able to think of a use case for running a pCode program meant for a 512K PDP11 on a 48K Apple II+. A bit ahead of its time.
Though it’s technically _not_ Java, Kotlin has been a breath of fresh air for us in the last year. We tried it on a few smaller projects and it took off like wildfire throughout our team.
I write a lot of JavaScript (typescript included) and I have found it to be a very nice combination of things I like from both Java/JavaScript.
Admittedly, I have been primarily writing Java for the last 15 years and consider JavaScript a second language. You may find Kotlin gives you a good reason to come back to the JVM (where it makes sense to, of course).
It succeeded. Early 2000's I converted my company's "document server" from _heavily_ ifdef'd C to Java. It ran on Windows, Linux, AIX, Solaris, HP-UX, and Linux on IBM mainframes. Having EBCDIC as the default encoding for strings really made you pay attention to encoding in and out of Unicode.
"Write Once,Run Anywhere" (WORA) is not the same thing as using the same language on the frontend and backend. Java has quite good success with WORA as explained by hota_mazi in the sibling comment. Aplets failing to catch on does not take anything from WORA success story of Java.
This is practically a trope at this point. Sure, most of us who started off with Java 1.5-1.6 felt the same and Java was pretty much taken seriously only in the enterprise world, but in the last decade so much has happened in the JVM world.
In retrospect, as an early adopter of languages like Scala/Groovy, I really like how Java just waited and watched for a few years to see what was good in those languages and let them make mistakes on the way to building something stable and then adopted a lot of things that made those languages fun.
Java from 11 onwards has been a great mix of developer productivity, a stable core (other than people writing trivial projects, most people want something that lasts for years without random bugs), portability, and great tooling (especially from the IntelliJ side, as well as on the debugging side).
I'd much more openly recommend Java as a loved language now than back in 2010 (though Elixir is the new and shiny project I'm playing around with right now ;)).
It’s really hard to make intuitive guesses like that because we’re all so siloed by the kinds of programming work we do. I did technical screening for about a year (interviewed ~400 people in that time). I thought javascript would be the most popular language (because most of the programmers I interact with write JS more than anything else).
But I was wrong - the most popular language by far for the programming test was Python.
This depends. At the end of the 90s and beginning of the 2000s I did an awful lot of Java, as I was doing work for telcos and they wanted nothing else. Then I changed the type of clients I worked with, and suddenly Java disappeared from my horizon completely (well, I did one more nice contract somewhere around 2005, I think).
For my own products I've never used anything but native (well except browser programming which was Javascript).
In my opinion, Java as a web language is good when you don't need to really understand what's happening (like in low-traffic situations). If concurrency isn't an issue, then the massive amount of libraries can help you a lot.
In high-traffic environments, that ignorance punishes you. I've always felt Java and the JVM are of the mindset that you need a Ph.D. to even understand how it works or how to configure it, and if you can't get it, then you're just a bad programmer.
You need to know if you're blocking threads, if there's memory contention, and if libraries you pull in are using the fork-join common pool (which you're likely using as your default thread pool). And when something blows up, finding the reason (even for any of the above issues) is really tough. You can use Flight Recorder, heap dumps, and GC logs all day, but good luck navigating it all unless you're a genius. I've seen too many devs end up shrugging and hoping the issues are transient.
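The common-pool hazard can be shown in a few lines. This is only a sketch: parallel streams default to `ForkJoinPool.commonPool()`, so blocking calls inside one occupy the shared pool's few threads, starving every other parallel stream (and any library using that pool) in the JVM.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

// Sketch: parallel streams run on the shared ForkJoin common pool.
// With N pool threads, only N of these "blocking" tasks run at once,
// no matter how many tasks the stream contains.
public class CommonPoolDemo {
    public static void main(String[] args) {
        System.out.println("common pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());

        long start = System.nanoTime();
        IntStream.range(0, 8).parallel().forEach(i -> {
            try {
                Thread.sleep(100); // stand-in for blocking I/O
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.printf("8 blocking tasks took ~%d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}
```

On a machine with fewer than 8 common-pool threads, the elapsed time is a multiple of 100 ms; the blocking serializes behind the pool size.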
Even figuring out proper thread pool usage isn't straightforward. Look at the number of abstractions needed just to model concurrency in your system: https://www.youtube.com/watch?v=yhguOt863nw. It's ridiculous.
Lots of large tech companies "seem" to "make it work." But if my experience is at all representative, they're just relying on a handful of Ph.D.s to hold the hands of the rest of the company when it comes to troubleshooting.
Part of the reason I fell in love with Elixir/Erlang and the BEAM is that it provides a simple (actor) concurrency model (with a single concurrency primitive, a process) and guardrails (time-slice scheduling) to prevent libraries from shooting you in the foot. OTP's observer makes finding bottlenecks a breeze.
For the web, taming concurrency feels way more important than any cpu-crunching perf gains the JVM can give you. I'm too stupid for the JVM; I'll stick to tools that take away numerous categories of complexity and get me closer to mastering my system.
>In my opinion, Java as a web language is good when you don't need to really understand what's happening (like in low-traffic situations)
Huh? Java is used in huge-traffic backends, from HFT, where minimal latency is critical, all the way to Google and Twitter scale.
If anything, Java is much faster and lower-level than the typical languages used for huge high-traffic services (Rails, Python, etc.), never mind what's used in "low-traffic situations".
OP isn't comparing it to Rails, Python, etc. He's comparing it to Erlang.
And, to that, I'd agree. I've built high-traffic stuff in Java. We built it, load tested it, and it was terrible. After multiple rounds of profiling, tweaking GC settings, tweaking thread pool sizes, rewriting things to be async, finding out that a client library wasn't reusing connections properly, etc., we finally had acceptable performance... which was still less than I'd gotten out of the box from similar, IO-bound services written as unoptimized Erlang.
> You need to know if you're blocking threads, if there's memory contention, and if libraries you pull in are using the forkjoin common pool (which you're likely using as a default threadpool). And when something blows up, finding the reason (even for any of the above issues) is really tough. You can use flight recorder, heap dumps and gc logs all day, but good luck navigating it all unless you're a genius.
I've seen the same troubles with the alternatives, just without the amazing tools, feature-rich standard library, or widely accepted conventions.
Erlang is amazing and places concurrency in a more central position. I'm hopeful Project Loom will greatly diminish the gap while carrying legacy code forward unchanged.
In defence of Java, I read somewhere it's 25 years old ;-)
Part of the reason for its success has been its strong commitment to backward compatibility, so it's to be expected that it might accumulate many ways of doing things. Python wisdom tells us this is often a Bad Thing. [0]
I imagine Java's approach to concurrency and parallelism might be quite different if it were designed today.
> I imagine Java's approach to concurrency and parallelism might be quite different if it were designed today.
Probably not, actually. Project Loom's initial goal was to rethink concurrency on the JVM from scratch. What they came up with was:
* Make threads really, really cheap
* Make thread locals work better (as scoped locals)
* Add a few Executor utilities to help you control sub-tasks better (structured concurrency)
It turns out that Java concurrency is pretty damn good already. It provides all the different paradigms you might want to explore, is efficient and well specified. Meanwhile they realised that many of the alternative approaches to concurrency are in reality trying to work around the high cost of kernel threads. When you make threads really cheap, a lot of the motivation for other approaches falls away and the existing set of tools in the JDK come to the fore.
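As a rough sketch of "make threads really, really cheap", assuming a JDK new enough to ship virtual threads (21+): you write plain blocking code, spawn one virtual thread per task, and need no callback or reactive machinery at all.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch (JDK 21+): ten thousand concurrent blocking tasks, each on its
// own virtual thread. With platform threads this count would be costly;
// with virtual threads it's routine.
public class VirtualThreadsDemo {
    public static void main(String[] args) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    try {
                        Thread.sleep(10); // plain blocking call, cheap to park
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("completed: " + done.get());
    }
}
```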
Your characterization of Loom is, I think, pretty accurate.
There are, however, a few things in Java's early concurrency support that make various things harder, including Loom, and we're having to put some extra effort into grappling with them.
Probably the most obvious is the fact that the language and VM requires every object to have a monitor lock that can be synchronized and waited/notified. In 1996 this was viewed as "Ooooh, sophisticated, building locking and concurrency support into the platform!" In recent years this has started to get in the way. Really only a very few objects are used as locks, but the _potential_ for every object to be locked is paid by the JVM.
It also intrudes on Project Valhalla, which is trying to define "identity-less" inline types (formerly, "value types"). Ideally, we'd want all conventional objects and inline objects to be descendants of java.lang.Object. But Object has the locking APIs defined on it, and locking is intertwined with object identity. So, does Object have identity or not? There are some solutions, but they're kind of weird and special-cased.
Another issue is that the locks defined by the language/VM ("synchronized") are implemented differently from locks implemented by the library (in java.util.concurrent.locks). Loom supports virtual threads taking library-based locks, in that when a virtual thread blocks on a lock it will be dismounted from the real thread. This can't be done with language/VM locks, so there's an effort underway to migrate those locks to delegate to library code for their implementation. This isn't an insurmountable problem, but it's yet more work to be done, and it's a consequence of some of the original design of Java 1.0's concurrency model.
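To make the distinction above concrete, here is a minimal sketch of the two lock flavors: the intrinsic monitor that every object carries (the language/VM locks) versus a library lock from java.util.concurrent.locks.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the two locking flavors: the monitor built into every object
// (language/VM level, "synchronized") versus an explicit library lock.
public class TwoKindsOfLocks {
    final Object monitor = new Object();          // any object can be a lock
    final ReentrantLock lock = new ReentrantLock(); // library lock
    int counter = 0;

    void incrementWithMonitor() {
        synchronized (monitor) {                  // intrinsic (VM) lock
            counter++;
        }
    }

    void incrementWithLibraryLock() {
        lock.lock();                              // java.util.concurrent lock
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        TwoKindsOfLocks t = new TwoKindsOfLocks();
        t.incrementWithMonitor();
        t.incrementWithLibraryLock();
        System.out.println(t.counter); // 2
    }
}
```

Both guard the same counter, but only the second form is an ordinary library object that runtime work like Loom can evolve without touching the VM's built-in monitor machinery.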
That's true, but if Java had been designed with a "synchronizable" keyword applied to classes, I wouldn't consider that a radically different language. The prevalence of unnecessarily lockable things is unfortunate from a JVM implementor's perspective, slightly convenient from a user's perspective, but ultimately not a defining feature of the language or platform, even if it may have seemed important in 1995.
When I think about Java concurrency today I tend to think of java.util.concurrent or the JMM. Perhaps that's odd.
Isn't Android mostly based on Java 7 (for instance, the Guava artifact you need for Android is the Java 7 one)? It can't be the reason why Java 8 is popular.
I believe the reason Java 8 is so popular is that there were a lot of backward compatibility problems with Java 9, compounded by the fact that Java 11 (the next LTS after Java 8; both Java 9 and Java 10 had a very short life) removed many APIs deprecated by Java 9.
Yeah, Android Java has turned into Google's own version of J++.
Worse, Kotlin fanboys don't get that without access to modern Java, their Java FFI is worthless: all Java 8+ libraries on Maven Central will slowly become useless on Android no matter what.
Additionally, the language cannot expose JVM capabilities unless it grows yet another backend.
So it will be stuff like value types, a JNI replacement, proper generics, customized JIT, and SIMD on the JVM, and plain old Java 8 on ART.
What does Maven Central have to do with the development of Java by Oracle?
That is precisely my point.
Android has completely unshackled itself from Java development. Between its reliance on OpenJDK and Kotlin, it literally has zero dependencies on Java.
25 years of libraries to choose from, slowly becoming unavailable on Android.
If the Android team plans to rewrite all of them in Kotlin, be my guest.
Maybe they will manage before Fuchsia goes live and Flutter wipes the floor, and then everyone will be doing Dart anyway.
Have you noticed how shitty all the languages designed at Google are?
Thankfully someone that was there since Java 1.0 days bought its rights.
GraalVM would have been killed at birth.
I am also looking forward to the complete Android development environment running on top of Kotlin/Native; otherwise it will be funny having to port Studio and everything else that depends on the JVM to modern versions, while Android itself stays frozen into a Kotlin ecosystem plus a Java 8 subset.
> 25 years of libraries to choose from slowly not available on Android.
> If the Android team plans to rewrite all of them in Kotlin, be my guest.
What are you talking about?
Android developers can use Maven Central like any other Java developers, without caring about which JDK those dependencies were compiled with, nor even whether they were written in Kotlin (most were not, obviously).
> I am also looking forward to the complete Android development environment to be running on top of Kotlin/Native, otherwise it will be so funny having to port Studio and everything else that depends on the JVM to modern versions, while Android itself is frozen into a Kotlin ecosystem + Java 8 subset.
Again, what are you talking about? Android development happily upgrades to the latest version of Kotlin without any trouble. Porting Studio? What? Do you even understand anything about any of these matters?
My point is simply that Android development today has zero dependencies on Java, but you seem to have a thick chip on your shoulder and are determined to spew toxic bile at Java and its ecosystem, while feeling some vague hate for Google in general.
I have zero interest in this debate, have fun tilting at these windmills.
They definitely cannot use them when those libraries make use of JVM features or JDK libraries delivered post Java 8.
Stating otherwise just proves that you don't know Java.
Android Studio and the complete Android toolchain run on top of a JVM implementation. As the JVM moves forward, JetBrains will be forced to update IntelliJ to take advantage of newer JVM versions, which will force Google to update their entire Android development environment.
Just for kicks they are already being forced to do this,
Again, another proof of a total lack of knowledge regarding Android.
Toxic bile at Java?!?
Quite the contrary. I have loved Java since 1996; it is my third pillar alongside .NET and C++. What I completely hate is that Google played a Microsoft move with their flavor of Android Java (aka Google's J++), helped drive Sun toward bankruptcy by withering their revenue stream from Java deployments on Android, didn't bother to rescue Sun, hoping it would close its doors without a hiss, now forces Java developers with its Android Java to create special versions of their libraries tailored to Android, and has a bunch of Silicon Valley fanboys supporting their damaging actions against the Java ecosystem.
More like shitty Java. I'm glad Java is being relegated to a second-class language that is not recommended for Android development. Kotlin is now the recommended language for Android development. I give Google 2-3 years before they deprecate Java on Android.
That's not really fair. The point of the Erlang language was its novel and opinionated approach to concurrency. Java wasn't trying to be like Erlang, it was trying to lure programmers by having significant similarities to C/C++.
There are a lot of points here I simply cannot agree with.

1) Concurrency as an issue - you make it sound as if doing concurrent programming in Java is hard. It's not, if you read through the documentation. In today's world of spinning up small servers, when done right, it scales to massive levels.

2) Blocking threads, memory contention, etc. - I think you may be comparing against the likes of single-threaded programming models like Vert.x or Node.js. The flip side, which in my opinion is more severe, is that if you get even a small thing wrong on that side, it blows up even more and is more difficult to isolate and debug. Plus there's the need to learn a whole new programming paradigm that is not easy to wrap your head around.

3) Lots of tech companies seem to make it work - while I agree that corporations will pretty much make anything seem to work, it requires a few engineers who understand software design principles to create designs and abstractions that work well. You imply that only PhDs understand that kind of stuff, whereas I am saying there are many others who understand it and who are genuinely making things work in the Java world. When concurrency is done right in Java, it is so performant that many will simply not believe it.

Just my thoughts, and happy to differ.
I believe there are few runtimes out there with monitoring and profiling tools as good, free, and out of the box as Java's. Java Flight Recorder, remote debugging, and monitoring can be really useful for rare production performance issues.
It's funny you should mention that in response to a post talking about Erlang. :P Because, yes, there are few. The BEAM happens to be one that definitely beats the JVM on that front.
So monitoring and profiling are pretty similar in what sort of things they tell you, but the meaningfulness tends to feel higher due to the programming model. Here's a fairly recent blog post giving an example from someone trying the BEAM for the first time, coming from the JVM - https://medium.com/@mrjoelkemp/jvm-struggles-and-the-beam-4d...
But when it comes to remote debugging, and more specifically, a general "I want to understand what is happening in production", the ability to attach a REPL, alongside your tools, is amazing. I can insert a breakpoint, sure (if I for some reason built my production instance with debug info), but just as easily (without any debug info compiled in!), and more usefully, I can query actor state, mailboxes, etc, fire a message to a process to see what happens, etc...all the things you'd get with a REPL running locally in your dev environment, basically. Do stuff like query for internal state for a process, then call a function with it to see what happens to the data, all in isolation from the normal execution flow (since immutable data gives you a degree of safety to actually run that live code, with copies of the live data, and see what happens). I can even remotely load new code, if I want, effectively allowing me to deploy a hotfix without taking the node down. And I can do all of this in prod. All of this is, of course, super dangerous, but with great power etc etc.
I hardly know anything about those things, and I write Java backends powering one of the most used services in my country. So I disagree with it being that complicated.
If you write the service stateless, it's incredible what you can achieve with a couple of small instances of a default Spring Boot container.
I work on a lot of legacy systems in financial services; the biggest thread-related issues concern variable scope in servlet design patterns. I just migrated an 18-year-old app off WebLogic to TomEE, and we had issues with Struts tags. The main app I work on has around 350 concurrent users. I have JMX on all the time, monitor it for long-running threads, and tune it when needed. The problem I have with Java is the massive number of libraries that can be used, and developers who copy and paste code without thinking about what it does. The verbosity kills me too; it's so overdone. I've decided to learn Go just because I'm tired of reading Java code, especially legacy code. But yes, you are correct: there are a lot of settings in the JVM and a lot to read to understand it. It's a powerful language with a lot of features.
> If concurrency isn't an issue, then the massive amount of libraries can help you a lot.
Can you point to another language that has anything remotely comparable to `java.util.concurrent`? Also, Java is getting green threads by means of Project Loom.
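As a small illustration (only a sampler, not a full tour), here are a few of the primitives `java.util.concurrent` ships out of the box: thread pools, concurrent collections, latches, and composable futures.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A small taste of java.util.concurrent: a fixed thread pool, an atomic
// map update, a latch to await completion, and a composable future.
public class JucTour {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        CountDownLatch latch = new CountDownLatch(100);

        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                hits.merge("requests", 1, Integer::sum); // atomic update
                latch.countDown();
            });
        }
        latch.await();   // wait for all 100 tasks
        pool.shutdown();

        CompletableFuture<String> f = CompletableFuture
                .supplyAsync(() -> "hello")
                .thenApply(String::toUpperCase);

        System.out.println(hits.get("requests") + " " + f.join());
    }
}
```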
> In my opinion, Java as a web language is good when you don't need to really understand what's happening (like in low-traffic situations). If concurrency isn't an issue, then the massive amount of libraries can help you a lot.
Not sure how to interpret this comment. If high concurrency and high performance matter, that is precisely where Java shines. The only other reasonable option would be C++, but it brings so much pain with it that Java is the way to go.
If traffic is low and performance doesn't matter (which is most sites), then sure, use whatever favorite scripting language.
Java is a language that protects your investment. If you write code for it today, it will probably run and be easily deployable in the future by default.
The same cannot be said for Python and JavaScript, for example. At least not by default.
Not sure if being "deployable" is an issue, considering Docker exists. So anything is deployable in the future by default, given a machine running the correct Docker image.
Java already runs on the new ARM machines ;) Someone has OpenJDK building for the server variant, and it works quite well already: https://github.com/gonzalolarralde/jdk/tree/gonzalolarralde/.... Apple had the C0 interpreter up on the day of the announcement because they needed it for various Xcode tools.
I must admit I am completely ignorant to how much of an issue this is in real life, but I'm also in the privileged position of having the backend code compiled and run on remote servers, rarely on my own machine. On the other hand, I'm a junior developer, so I might yet stumble on this problem's relevancy at some point.
To be fair, Docker is already a pain on my machine (using Fedora 32). I gave up on using Docker at some point.
It's very widely used, that's for sure. But it's increasingly not a good choice: in the examples you give (web, ML, GUIs) it is not first class in any of them (and not near it in the latter two).
What's next for Java should be relatively little change; let a language like Kotlin without all the baggage be the way ahead on the JVM. There's a remarkably good compatibility story there; way better than basically any other language ecosystem out there, that's the real legacy of Java.
I like to think of Java as 'a reasonable choice for almost any problem'. Almost never the best one. If you're looking for a single language to standardize on for all the things corporate-wide, it makes sense. But don't lie to yourself thinking you've picked the best tool for any job.
I think it's the JVM that's the real win for Java. At my work we have a mix of Java, Kotlin and Scala and they all Just Work in our deployment pipeline that was built with just Java in mind.
I think it's the libraries. They're like Barbie - they have everything. You need, say, to store affine transforms in your SQL database? Java can bridge those very different worlds. (It literally has affine transforms in the library.)
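That claim is literal: `java.awt.geom.AffineTransform` ships with the JDK. A tiny sketch rotating a point a quarter turn about the origin:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// The standard library really does include affine transforms.
// Rotate the point (1, 0) by one quadrant (90 degrees).
public class AffineDemo {
    public static void main(String[] args) {
        AffineTransform rotate = AffineTransform.getQuadrantRotateInstance(1);
        Point2D p = rotate.transform(new Point2D.Double(1, 0), null);
        System.out.printf("(%.1f, %.1f)%n", p.getX(), p.getY());
    }
}
```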
> I think it's the libraries. [...] they have everything.
On the other hand, they need to have everything. In many other languages, it's common to just use a library written in a different language. For some reason, the foreign function interface of Java seems to have been designed to be hard to use, so instead of using an already existing library, Java developers tend to go through the route of "Rewrite It In Java".
This is the nature of technology and economics though. Widely used and mature technology will almost by definition be behind the state of the art. Progress is always happening and by definition a new language cannot achieve mass adoption over night.
Java is big because it has been around a long time and was decent when it came out. Java was the Go of its generation. Nothing radically new but wrapped up in a way people liked and was familiar with in large part due to the success of C/C++ prior.
Whatever achieves mass adoption after Java will also be behind the times by the time that happens, and as geeks we will have moved on to whatever is newer and cooler.
I am pretty neutral towards Java as a language. My biggest issue is with the software culture of over-engineering and complicating things. Java folks seem very dogmatic about how to design software.
Not only that, but Java's reflection features opened the door for a huge, vibrant modding community that is arguably one of the largest among any PC game.
Very early in Minecraft's development, people were already decompiling/modifying/injecting their own mods, and a lot of frameworks (Bukkit, Spigot, etc.) emerged to provide a common API for modding.
The large modding community arguably had a very positive impact on Minecraft's early success -- Although I don't have any quantitative metrics to reinforce that point, I fondly remember early Minecraft as having a relatively technical community that tinkered with the game as a sandbox for countless custom experiences.
Java is still my default language after all these years (even if through Clojure). I think Java's biggest advantage is the JVM, though; if it had an ML-family language on top, without null and with proper interfacing to Java packages, I would never look at anything else. Just like .NET has F#.
Great question. I think it is a good alternative to Java, even though it is trying to do too much. Not sure whether that is because it is trying to build on top of Java or for some other reason. Scala is still a good and productive language, though!
Not the commenter you're replying to, but I've actually looked into Scala a few times, and while it has some big criticisms I don't necessarily disagree with, there were many things I actually liked about the language. I briefly considered picking it up for a side project, but ultimately went with something else that provided more relevant features and abstractions.
Sadly, in the real world, it seems that Scala is mostly relegated to the Spark world.
Java is pretty solid, but not perceived as cutting edge anymore, even though they are still improving it. I think Oracle has done a better job than I originally expected. I think the ecosystem is by far its biggest advantage. It's an integration target for many projects just because it has such wide adoption.
Nothing competes with Java. Nothing. Because Java wasn't about destroying the competition; Java was about creating a reality that otherwise did not and could not exist. It was about imagining the "what could have been", and then creating that.
The main problem with Java is its concurrency model, which gives incentive to the creation of threads that fight for resources and introduce bugs. This seemed a wise choice in the 90s, but as concurrency has increased several-fold in the last 25 years, the model cannot scale to real software needs. It is the Java equivalent to pointers in C.
Some will do for sure - but it might not even be "most". There is a surprising amount of thread-per-request code out there. And actually it's doing fairly well - even at the scale of the companies you listed, if you are using it in the right place.
Right place here means don't use it for a frontend proxy which needs to manage 100k keepalive connections.
But for a service which needs to handle 500 concurrent requests at maximum and doesn't have to deal with TLS anymore it will be fine. And there's enough of those services out there.
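For concreteness, a thread-per-request service at that scale is just a bounded pool of blocking handlers. A minimal sketch (class and method names are made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Thread-per-request in miniature: a fixed pool caps how many requests are
// in flight at once, so the simple blocking style stays well-behaved.
class ThreadPerRequest {
    static final AtomicInteger handled = new AtomicInteger();

    // A real handler would do blocking I/O (DB call, downstream HTTP) here.
    static void handle(int requestId) {
        handled.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        // At most 500 concurrent requests; excess work queues up.
        ExecutorService pool = Executors.newFixedThreadPool(500);
        for (int i = 0; i < 1000; i++) {
            final int req = i;
            pool.submit(() -> handle(req));
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("handled " + handled.get() + " requests");
    }
}
```

The pool size is the whole back-pressure story here, which is also why swapping it for an unbounded pool (as discussed below regarding Jetty) changes the failure behavior.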
A lot of the Java code in bigger companies is also written against older frameworks like earlier versions of Servlet and J2EE. Those programs will also not make any use of async mechanisms and prefer a simple programming model instead.
You're misinterpreting what I said. I talked about the "main problem with Java", and with C/C++. Saying that a tool has a problem doesn't mean it is useless, far from it.
Amusingly, Loom, which will likely appear in the Java 17 time-frame, will allow you to keep all that blocking code and automatically transform it into non-blocking code. Going to make coding high-concurrency applications a lot easier. Check out the few lines of code needed to convert Jetty from blocking to non-blocking:
I'm very skeptical of this conversion to Loom. Jetty bounds the number of concurrent requests it serves based on the size of its thread pool (of course Jetty also uses a few of the threads to run its acceptors and selectors). The implementation provided here replaces Jetty's fixed-size thread pool with an unbounded thread pool. This is going to lead to some terrible failure states when the service slows down.
What alternative model would you suggest? Green threads still introduce bugs (race conditions), and fight over resources (DB connections, filesystem, etc.)
This is just silly. What concurrency model doesn't "fight for resources and introduce bugs"?
Threads (and pointers, which you compared them to) are the abstraction at the hardware level - everything else has to be built on top of them in one way or another. Just because you have access to threads (or pointers) doesn't mean you have to make poor architectural decisions. I'd like to draw your attention to Doom Eternal which takes the thread pool model through to its logical conclusion. (https://twitter.com/axelgneiting/status/1241487918046347264) I hope you'll agree that's an example of meeting the needs of real software. (I'm sure it's not the first or only example of that approach, it was just on my mind because it came up recently.)
I tried. I honestly tried. Java is my #1 most hated programming language (maybe unless you forget everything before 8 .. and even then.. import java.util.function.BiFunction makes me want to run to a mountain)
One thing that is hurting Java today is memory usage. Conventional JVMs use a lot of memory relative to essentially everything else, and this drives up cloud bills. There are alternative JVMs and other tools, some of which are embryonic at this point, but what the world wants, and what Java really needs to continue thriving, is efficient memory use by the bog-standard JVMs that work with everything.
That's not always true. Most of the time when Java uses a ton of memory it's because people use the default memory settings. If you tell Java to use up to 90% of system RAM, it will. Garbage collection is expensive so it will delay until memory is depleted.
This is a different GC design than V8 and Go, which use older collector designs with high overhead. They need to collect very frequently because their stop-the-world pauses get longer with more garbage. Java's new collectors are near constant time, even with terabytes of garbage, so it's much more efficient to wait until the heap builds up and collect much less frequently. Ironically, Java appears to use tons of RAM because it has better garbage collectors.
When you configure Java GC to collect frequently, it turns out Java uses 2-3X less ram than JS, and far less than Python, Ruby, etc. It does use more than Go, about 2X. But the point is it uses a lot less ram than most other popular languages for web dev.
Unfortunately, "Java uses too much RAM" is used in defence of using things like JS and interpreted languages, when in actuality Java uses much less if you configure it to.
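An easy way to check what limits your JVM actually picked up (`HeapInfo` is a made-up name; try running it with flags like `-Xmx256m` or `-XX:MaxRAMPercentage=50` and watch the numbers change):

```java
// Prints the heap limits the running JVM settled on. With no flags, a typical
// modern JVM defaults its max heap to a fraction (often 1/4) of physical RAM,
// which is where a lot of the "Java eats all my memory" impression comes from.
class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap (MB):  " + rt.maxMemory() / mb);
        System.out.println("committed (MB): " + rt.totalMemory() / mb);
        System.out.println("used (MB):      " + (rt.totalMemory() - rt.freeMemory()) / mb);
    }
}
```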
Not sure that is all of it. You can try turning down the max total memory to eg: 64mb and see what you can still run. An awful lot of stuff just can't, while similar applications in conceptually higher level languages (eg: Python, PHP, etc) happily execute.
The various factors I see are:

- JVM overhead: 10-15mb seems to be required just to get off the ground, which directly relates to the JVM replicating a bunch of OS functionality (an inevitable tradeoff for portability)
- Missing value types: any complex data structure ends up with big overhead from storing references
- Stack frames: if you need a lot of threads you need a lot of stack, and a surprising number of threads run
Some of this will get addressed in upcoming releases, will be interesting to see how it goes.
GC and memory settings are not the only cause. There are several other factors:
* JVM overhead - all these advanced optimizing compilers and GCs have non-zero footprint. They need RAM to run their code and they need RAM to perform their tasks.
* Compiled code cache - JVM keeps both the original bytecode and generated machine code in RAM.
* OOP overhead - each object has 2 or 3 words of overhead for the object header vs zero in languages like C or Rust. Even when you don't need dynamic dispatch or object locking, you pay for it.
* Inability to compose bigger structures other than by allocating separate objects and using pointers to reference them - these pointers need space and are not cheap on 64-bit architectures. This is going to probably partially improve with Valhalla, but at this point it is mostly guessing and it has been in development for years.
* No support for packed arrays.
* The smallest unit of loadable code is a class. If you needed a single function, the JVM loads a whole class containing it, and its required dependencies. This is not only bad for memory usage but also for startup time. Unless you pay a lot of attention, it is easy to load 80% of code in order to just display a help message (this is based on a real issue I worked on - I'm not making this up). Compare that to code in languages like C - the OS loads code that gets executed.
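The class-granularity point can be observed directly: a class is initialized lazily on first use, but always as a whole unit, never function-by-function. A small sketch (all names are made up):

```java
// Demonstrates that the JVM initializes classes lazily and as whole units.
// InitTracker exists so we can observe Heavy's initialization without
// accidentally triggering it ourselves.
class InitTracker {
    static boolean heavyInitialized = false;
}

class Heavy {
    static { InitTracker.heavyInitialized = true; } // runs on first use of Heavy
    static int helpMessage() { return 42; }
}

class LazyLoadDemo {
    public static void main(String[] args) {
        // Heavy has not been touched yet, so its initializer hasn't run.
        System.out.println("before: " + InitTracker.heavyInitialized);
        // First use of any member initializes the entire class.
        System.out.println("value:  " + Heavy.helpMessage());
        System.out.println("after:  " + InitTracker.heavyInitialized);
    }
}
```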
And back to GC - I agree the default settings are often to blame, but there is a reason JVM defaults to using RAM so aggressively. GC becomes very inefficient when it doesn't have enough "room". And low pause GCs achieve their low pause goals by trading throughput. Switch from parallel STW GC to G1 and your maximum sustainable allocation rate goes down by a few times.
When I hear "Java uses too much ram," lazy reclamation of the heap and other GC overhead are not the first things that come to my mind, personally.
Java doesn't have a "struct". If you want to represent an array of 64-bit signed integers, Java has you covered with its primitive arrays. But if you want to represent an array of anything more interesting than that, (say, a tuple of a double and a long), you have to serialize and deserialize those objects to and from parallel primitive arrays or byte buffers. Because if you do the language-natural thing and use an Object array, you're paying a huge price in memory: 4 or 8 bytes per pointer in that array, plus a 16 byte Object header on each Object. And, of course, those Objects are all individual allocations, not necessarily contiguous. That's a lot of overhead!
Of course, Java programmers concerned with memory usage don't put up with this. Lots of solutions have been devised. OpenHFT's Chronicle Values[0] is one example I came across recently. But this feels like fighting with the language compared to how easy it is to be efficient with memory in C. If you told a beginner C programmer to make an array of compound objects, it's not unlikely that their array will take up exactly as much space in memory as it intuitively seems like it should. (8 byte double + 8 byte long) * 100 values = 1600 bytes in C, no fuss. If you asked the average Java programmer for that, they'd give you something that would take up 3 times as much memory. And because Java makes that behavior natural, it "uses too much ram". It doesn't matter as much that it's possible to convince it not to.
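To make the layout trade-off concrete, here's a sketch of both approaches for the (double, long) example; `Sample` and `SampleColumns` are made-up names. The columnar version stores 100 pairs in exactly 1600 bytes of array payload (plus two array headers), while the object-per-element version pays a header and a pointer per pair:

```java
// Object-per-element: each Sample costs an object header (~16 bytes) plus
// its fields, and the array holds pointers to scattered allocations.
class Sample {
    double price;
    long   timestamp;
    Sample(double p, long t) { price = p; timestamp = t; }
}

// "Parallel primitive arrays" (struct-of-arrays): contiguous, header-free
// payload, at the cost of losing the natural object-oriented API.
class SampleColumns {
    final double[] prices;
    final long[]   timestamps;

    SampleColumns(int n) {
        prices = new double[n];
        timestamps = new long[n];
    }

    void set(int i, double price, long timestamp) {
        prices[i] = price;
        timestamps[i] = timestamp;
    }

    double price(int i)     { return prices[i]; }
    long   timestamp(int i) { return timestamps[i]; }
}
```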
What you are saying is true but even when you use optimized frameworks like Micronaut and stay below e.g. 50MB java heap usage you still end up using more RAM than what the Java heap says. Even the most optimized Java program on the planet will be beaten by a carelessly written C++ program. At least in terms of memory usage. I know it because I once wrote a React app with a C++ backend that also included sqlite. The C++ version needed 4MB of RAM at worst. Meanwhile the Java version didn't even include a database!
> This is a different GC design than V8 and Go, which use older collector designs with high overhead.
I think this information is a bit dated. Go has a highly advanced concurrent collector with very low pause times (~1-100μs). V8 also has incremental marking, concurrent marking, and parallel compaction. Its pause times are more like 100-1000μs. V8's GC has been tuned more and more to save memory (i.e. smaller heaps) because people have so many tabs open these days.
I don't agree with your pause time estimates. Most collectors are fast with small amounts of garbage in a small application. It's high allocation rates where GC time becomes a problem, and I don't see anything about Go or V8 designs to prevent very bad worst-case pause times.
Uh, most of the rant about Go GC below is out of date, they made some huge improvements from 2016-now. I'm leaving it up because someone replied to it
Unless something has changed in the last year or two, Go's GC is similar to Java's old CMS collector which is being deprecated.
Go's GC is non-generational and non-compacting, lacking both hallmarks of modern GC algorithms. It's not a modern "moving" collector. It also has several stop-the-world phases. It's basically a design from the 70s. The GC uses old, simple algorithms because the team was under a time crunch when it was developed. This may have changed, but that's how it was in the 2018-2019 timeframe.
The pause times are short because GC runs very frequently. It has to because performance with large amounts of garbage is quite bad. This results in significant GC CPU overhead.
I don't know much about the V8 collector besides that it is generational and compacting, so more modern than Go's. But it's still a "stop the world" design. In that regard it's closest to Java's old CMS collector.
Java's new collectors, ZGC and Shenandoah, both have near constant time stop-the-world phases. You can collect terabytes of garbage with only a few milliseconds of pause. In V8 or Go, this would be many seconds of pause time as the mark phase is "stop the world" in both.
You can find benchmarks that show one way or the other, but in badly behaving or allocation-heavy applications, Java's new collectors or the older G1 will perform far better. V8 and Go are dishonest about their GC performance by showing average pause times with high collection frequency. The important GC cycles are the long ones, so you really want to measure worst-case pause time under load.
Under heavy load Go's design falls over. It's not compacting, not generational, and the mark phase pauses the application. IMO, it's just not a good GC. V8 is better, it is generational and compacting, but mark phase is still STW. Java's ZGC isn't generational but importantly, the mark and sweep phases don't stop execution. No matter how big your heap is and how much garbage, your GC pauses will be short
> I don't know much about V8 collector besides that it is generational and compacting, so more modern than Go. But it's still a "stop the world" design. In that regard it's still closest to Javas old CMS collector.
> ... In V8 or Go, this would be many seconds pause time as the mark phase is "stop the world" in both.
Like I said before, your information is outdated. V8 has both incremental and concurrent marking. I even mentioned it in my comment, but apparently you didn't read that either. V8 only stops the world for semispace evacuation and compaction. It doesn't compact the entire old generation at once, but decides on a per-page basis.
For Go's GC, I am going by public information presented by one of its primary designers, Rick Hudson, who has since retired.
Java's new GCs sound fantastic! It's great for the field in general. However, I would encourage you to spend less time misrepresenting other people's work and making up numbers.
V8 still uses STW for mark sweep. The benchmark image on this https://v8.dev/blog/concurrent-marking shows 50+ms pause time, quite bad compared to ~5ms or less in new Java collectors. This might be due to STW move phase though? They don't really explain, but the long pause times show that there's definitely still long STW pauses in V8.
For Go, I'm going to be a bad HN user and not read the whole article. Sorry, it's just too long for this time of night. It does appear that my understanding of Go GC is out of date. There's been many improvements in the last couple years. Some strange behavior due to not enough knobs to configure GC, but it appears to have a near constant GC pause? https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...
I'm annoyed that Google doesn't offer much in the way of benchmarking results for V8. Huge articles about improvements made, with a single benchmark image. And they didn't use standard benchmarks for either, so it's unclear what they're even benchmarking. The Go slides you linked include benchmarks from some guy's production server he tweeted images of, plus a bunch of standard benchmarks, but they only show % throughput improvement, and no pause times.
Well unfortunately you are still misunderstanding, so let me be more precise so we are talking about the same thing. V8 uses incremental marking (i.e. splitting mark work into smaller chunks and interleaving those chunks with mutator time) as well as concurrent marking (i.e. multiple parallel collector threads marking in the background, concurrent with the mutator). Not mentioned in the article, but sweeping of pages is also incremental (i.e. dead space reclaimed on-demand when free lists run empty) and concurrent when idle (i.e. in the background). So the statement "V8 still uses STW for mark sweep" is just wrong. Like I said before, V8 only stops the world for semispace scavenges (fast, < 1ms) and compaction (slow, ??ms), but compaction is less frequent than mark/sweep, which is incremental and concurrent.
You also misunderstood what is reported here. That 50ms main thread marking time is cumulative, meaning those 50ms are spread over the entire garbage collection cycle, split up into small increments so that the mutator (main thread) is not stopped the entire time. It's explained there in the text and illustrated in the second-to-last diagram.
> quite bad compared to ~5ms or less
Again, it is not 50ms pause, it's 50ms work, split into much, much smaller incremental pauses, typically less than 1 ms each. That number is not presented in your linked article but is pretty typical. The V8 GC needs sub-millisecond pause times because it has a soft realtime requirement in that it may end up on the critical path for frame rendering (60fps = 16.6ms).
> For Go, I'm going to be a bad HN user and not read the whole article.
FTA "...The August 2017 release saw little improvement. We know what is causing the remaining pauses. The SLO whisper number here is around 100-200 microseconds and we will push towards that. If you see anything over a couple hundred microseconds then we really want to talk to you and figure out whether it fits into the stuff we know about or whether it is something new we haven't looked into. In any case there seems to be little call for lower latency. It is important to note these latency levels can happen for a wide variety of non-GC reasons..."
TLDR: if you see pause times of more than a couple hundred microseconds, call the red phone.
Also, please note, I am just trying to provide accurate information about the collectors I do know about, designed by people I work(ed) with. I don't know enough about ZGC or Shenenadoah to confidently assert anything about their performance characteristics, but based on what I read I am actually very excited to see them make it into production. I consider advances in GC to be overall a good thing for everyone, and would encourage you to be more open to learning the advantages and disadvantages of various systems without as much derision and not try to pick sides.
Go has also simplified its heap management relative to Java's in a notable way: you cannot set a limit on Go's heap. You can set a limit on the memory used by the Go process, of course.
I'll shamefully admit that I have been running and writing JVM based services for years and I didn't know this. I thought that the fixed memory overhead for a simple JVM service was simply higher than with CPython as a fact of life.
There are times when I'd happily trade more frequent GC pauses for a smaller per-process memory footprint. How do you find a reasonably small Xmx that doesn't lead to OutOfMemoryError exceptions?
> How do you find a reasonably small Xmx that doesn't lead to OutOfMemoryError exceptions?
That's tough to figure out. In new versions of Java, I think 14+, if you use the ZGC collector it will return unused memory to the OS. Memory options vary depending on the collector, but new versions of ZGC support a "soft max" heap size and uncommit. Together that might be close to what you're looking for: https://malloc.se/blog/zgc-softmaxheapsize
I should mention the GC situation was worse until the last few years. Until ZGC and Shenandoah came around, Java still didn't collect frequently but when it did there were long pauses. This is what V8 and Go's collectors were designed to avoid. They have more overhead from collecting frequently, but low pauses. With the new Java collectors you get the best of both worlds.
You can set heap ratios and such for older collectors to decrease Java memory use with those, but IMO you're better off using ZGC and uncommit these days
Indeed. That fact obviates this option in most cases. You have to spend time tweaking obscure, unstable knobs (the X in Xmx means Oracle is free to alter its meaning at any time) and risk either a.) serious failures in production or b.) poor results because the conservative choices necessary to avoid 'a' achieved little improvement and you wasted your time. The real world for most enterprises is a vast herd of communicating components, and toying with GC switches multiplied by N things is a nonstarter.
So while you're technically correct that excessive memory use by conventional JVMs is "not always true," in practice you are wrong. That reality comes with a real cost that appears on a real bill every single month.
The cost of setting a few command line args? They're not really obscure or unstable, just different depending on the GC you use. Turning on ZGC and setting the right options is like 4 command line flags. It's very easy, the reason it's not frequently done is that nobody reads documentation and it's not the default.
Go's approach of minimal knobs leads to unfixable problems in production. Java gives more options to tune GC for your use case
> Go's approach of minimal knobs leads to unfixable problems in production.
I didn't bring up Go, but since you did the thing I see is that Go -- a much younger language -- is going places Java never has, or did so only haltingly. Caddy is a case in point. Here is Go taking on nginx, haproxy, Envoy, etc.
The people that once imagined using Java for such things have retired or moved on to other battles. No one seriously ponders attempting 'systems' tasks with Java any longer; that whole space was ceded to more efficient languages. My opinion is that Java's poor efficiency -- a big part of which is its excessive memory consumption -- is the reason for this.
That's my opinion. What I know for fact is that today, when people are making design decisions about new services and their deployment, Java is a problem; it is understood that anything implemented in Java is going to sort right to the top of the list of memory pigs in the cluster, and you can only afford so many of those.
I think you're ignoring that people don't want to "tune" their GC. They just want it to work. So instead, they are going with the obvious route of just buying more RAM. This is perfectly fine on servers, which is where Java shines. As soon as Java has to be used for e.g. CLI scripts, daemons, or situations in which multiple process instances have to run at the same time, then Java is an incredibly poor choice and there is nothing you can do about that. If you are a genius at memory optimization in Java then you'll see even bigger gains in C++ or Go or Lua or Javascript or Python. Some of these options may not be as fast as the JVM but this discussion is purely about memory usage.
> I should mention the GC situation was worse until the last few years. Until ZGC and Shenandoah came around, Java still didn't collect frequently but when it did there were long pauses.
This feels like G1GC erasure. I'll also say we've tried out ZGC and while pause times were low, it had a huge CPU overhead and the performance of our application was notably worse and we went back to G1GC. We're still on Java 11, so maybe we'll see some magic when we eventually try the newer versions.
20th October, OpenJDK 11.0.9 will be released, with full-fledged Shenandoah GC.
It has _very_ low pauses and has the `-XX:ShenandoahGCHeuristics=compact` option, which will promptly give unused memory back to the OS (thus keeping the whole memory profile low).
Thanks for the advice. I'm stuck on 11 until the next LTS release but I'll definitely be trying ZGC next year. (ZGC is in 11 but still marked experimental, and I'm conservative about making changes.)
That's probably a good decision. When ZGC was first released it didn't support class unloading, including in 11. This will lead to puzzling memory leaks in applications that generate a lot of code at runtime.
Newer versions of ZGC support class unloading and have some other performance optimizations
Depends on your application and how much memory you actually need. I would also use ZGC, as it returns unused memory to the system and has low single-digit-ms maximum pause times even for terabytes of heap.
I have never managed to get a JVM to use less than 100MB RAM. The application in question needed significantly less than 32MB of Java heap. Meanwhile equivalent lua programs can do just fine with 1MB RAM.
All OK except when you need the same level of performance for those 0.001% of requests when the GC kicks in and takes the response time outside acceptable limits. Due to this reason alone, my company is planning to move off a popular Java based API gateway and to a C++ envoy side car implemented service mesh. And I am wondering if this is really worth it.
Consider trying ZGC first. The main selling point is low worst case pause time. I've seen some tests where P99.9 pause time was less than 5ms, vs several hundred for older collectors.
In my limited tests I never saw a GC pause over 5ms. I was basically hammering a Spring Boot application with HTTP load tester.
With project Valhalla (value types), as well as the GC improvements in recent JDK releases (e.g. they are much more aggressive in releasing unused heap back to the OS), this should be a much smaller issue going forward.
On another note, I'm not aware of other freely available GCs in other languages that are able to easily scale to multi-GB/TB scale memory usage. A while ago, I benchmarked an open source key/value golang project and it performed miserably when it reached GB level memory usage.
> e.g. they are much more aggressive in releasing unused heap back to the OS
By aggressive you mean they actually do that now, right? As far as I know, before ZGC no GC did that, and they're still backporting that feature to G1, right?
Edit: I'm actually quite pleased with ZGC I have the eclipse language server use it and my editors memory usage on average is so much lower.
I believe it has been backported to JDK14 (and JDK15 was released this past week). Also Shenandoah should be production ready in JDK15, so you might want to give it a shot.
I fondly remember the excitement around Java when it was initially released in 1995. I was in college and the Mosaic browser only recently appeared on computers in the campus library. "Applets" were positioned as the next evolution of the "Information Superhighway"--the ability to write real applications and distribute them to anyone directly over the "World Wide Web" without the need for floppy disks.
Although Java applets didn't pan out, it gave people a glimpse of the future; paving the way for Shockwave, Flash and the rich interactive web applications that dominated the 2000's. As Java pivoted to the server, it also ushered in the next generation of enterprise web applications.
Happy 25th Java! From a language many first experienced via scrolling web tickers to a rock solid server-side platform that went on to dominate the enterprise. Java will remain ubiquitous for many years to come--even if many don't even know it's there.
Slightly tangential, but due to a new job I'll have to bite the bullet and learn Java.
When googling tutorials, I see the same material I found 12 years ago. A lot must have happened since then.
What's a good resource to learn Java for somebody who already knows how to program? I'm interested in ecosystem, tooling, best practices, common pitfalls etc.
Yes. As others have said, 'Java is stable', which means the old stuff still works. Which in turn means that a lot of people are still using it and still writing blog posts about it. But many of those old things are now obsolete, or needlessly complex, and just 'lesser than'. They don't support certain nice features or support them very badly, or have other significant downsides - 25 years of experience does lead to insights, after all.
20 years ago, you loaded your JDBC driver with `Class.forName`. You _STILL_ see this in many examples (and it hasn't been necessary for 15+ years).
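For the record, the before/after looks like this (driver class, URL, and credentials are placeholders). Since JDBC 4.0, drivers register themselves via the ServiceLoader mechanism, so the `Class.forName` line is dead weight:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class JdbcExample {
    static Connection connect() throws SQLException {
        // Old tutorials still show this, but it has been unnecessary since JDBC 4.0:
        // Class.forName("org.postgresql.Driver");

        // Modern drivers self-register; just ask DriverManager for a connection.
        return DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
    }
}
```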
These days, you:
* Use a connection pool like HikariCP
* Consider raw JDBC as basically nuts as far as an API goes, and you use JOOQ or JDBI. Or JPA/Hibernate of course, if you don't want SQL/want DB independence and don't think you'll need to performance-tweak queries too much.
* You use serializable transaction levels and toss _all_ the code that interacts with DBs into a lambda so that the framework can handle retries for you.
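The "transaction body in a lambda" idea can be sketched framework-free; `TxRunner` here is a made-up stand-in for what JDBI/JOOQ actually provide, with begin/commit/rollback elided to comments:

```java
import java.util.function.Supplier;

// Sketch of the retryable-transaction pattern: the caller hands over the
// whole transaction body, so the wrapper is free to re-run it after a
// serialization conflict.
class TxRunner {
    static <T> T inTransaction(Supplier<T> body, int maxRetries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                // begin transaction (SERIALIZABLE) here in a real framework
                T result = body.get();
                // commit here
                return result;
            } catch (RuntimeException e) {
                // rollback here; a real framework would only retry on what it
                // recognizes as a serialization failure
                last = e;
            }
        }
        throw last;
    }
}
```

Because the body may run more than once, it must not have side effects outside the transaction, which is exactly why frameworks push you toward the lambda shape.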
And that's just DBs. As a general trend:
Libraries tend to wax and wane. Right now spring is _very_ popular. JSP is the kind of outdated crud that is just a straight up 'do not use this right now' (even if it still kinda works). For date stuff, use java.time. Libraries in general are more focussed on configure-via-java-code, and dip more into code generation and annotations (example: JOOQ). You don't use The JSONObject API, you use jackson or gson. The list is very long.
I have no particular advice on how to know all this stuff as someone not familiar with the (modern) java ecosystem, though. Just pointing out that 'stable' doesn't translate to 'not much new in the past 15 years'.
Baeldung's content is IMO uniformly quite deficient. If you want examples of how to do things poorly, which examples may or may not compile, or if there's no other alternative, then I guess read the Baeldung article.
Try Scala instead of Kotlin, it's much more powerful, and you can safely avoid the mad Scala libraries jam-packed with symbol infix notation.
I'm the perfect case of what you said re. old devs -- I've been using various JVM languages for ~15 years, but still didn't think of replacing JDBC :) Will check those out!
Everything is OpenJDK now, forget Oracle. AWS puts out LTS JREs, etc.
Try and get your employer to pay for a jetbrains IDE, IntelliJ IDEA.
Use Maven for builds. It's simple-ish.
Use Spring for frameworks. Everything's been done, so you won't be first with any problems here.
And all the stuff from 12 years ago is probably what people know and do, so it's still on point.
No one does inheritance anymore; composition's all the rage. Not passing judgement on that, it's just a thing.
OpenJDK is the name of Oracle's (one and only) Java implementation project (take a look at the logo at http://openjdk.java.net/). Oracle JDK is the name of the commercially supported product built from OpenJDK, and Oracle also distributes the JDK under a 100% free license (http://jdk.java.net/).
While OpenJDK has been the open-source part of the Sun/Oracle JDK since 2007, Oracle recently completed open sourcing the entire JDK, so that there are no more paid features. The JDK used to be part-free and part commercial, and now it is completely free; you only pay Oracle -- or other companies --- for support if you want it. Other companies contribute to OpenJDK as well, but Oracle still contributes ~90% of the work, and all OpenJDK builds by all vendors are licensed by Oracle. So while you absolutely don't need to pay Oracle (or Sun, as you did before) for using the JDK any more now that it's 100% open, you should at least know that Oracle is the company that (primarily) funds and develops OpenJDK.
There is so much risk associated with anything that Oracle touches, I suspect some very large organizations run the other way without really looking. Instead of Java, they switch to GoLang, and costs be damned.
The real damage that Oracle caused by their Android lawsuit, and by their JDK licensing scheme change, will reverberate for a long time.
Well, Google itself heavily uses Oracle's OpenJDK internally (as do Apple, Amazon, Netflix, Facebook, Microsoft, Twitter and many, many others), and has even forked it, contributed to it, and spoken about it at a conference at the Oracle campus [1] -- all at the very height of the lawsuit -- so they clearly aren't concerned. Whatever you think of that lawsuit, the circumstances behind it were so extreme (one company copies over 10KLOC from another in order to directly compete with it in a very lucrative market) that nothing like that had ever happened before, nor has happened since, in the software industry.
> and by their JDK licencing scheme change
The JDK licensing change was that Oracle changed the JDK from part-commercial part-open to 100% open for the first time in Java's history. On the commercial side, the change was from part-upfront, part-subscription to just subscription, which cut the price for customers by a factor of 5, I think.
What's important to remember about Java is that it's huge, and many companies make money off of it, and so companies have an interest in creating FUD over Oracle's involvement. I can't speak for other parts of the company, but there's near consensus among Java users that Oracle has been a better steward of Java than Sun, both in terms of technical investment and in terms of licensing.
> No one does inheritance anymore composition's all the rage.
Hmmm, that is an oversimplification IMHO. When OOP first gained popularity, there was a lot of emphasis on inheritance. Inheritance was misused, resulting in systems that became too rigid to change over time. The proverbial pendulum has swung the other way, resulting in this sentiment of "no one does inheritance anymore," but inheritance is still very powerful and productive provided you learn how to use it correctly. Think of inheritance as an advanced feature best left to more senior engineers.
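The classic illustration of that rigidity is the InstrumentedHashSet example popularized by Effective Java; a minimal sketch (the code here is paraphrased, not the book's verbatim):

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Inheritance: fragile, because HashSet.addAll happens to call add() internally,
// so the count below ends up doubled.
class InstrumentedHashSet<E> extends HashSet<E> {
    int addCount = 0;
    @Override public boolean add(E e) { addCount++; return super.add(e); }
    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c); // internally re-enters our add(): double-counted
    }
}

// Composition: wrap a Set and delegate explicitly; immune to HashSet's internals.
class InstrumentedSet<E> {
    private final Set<E> inner = new HashSet<>();
    int addCount = 0;
    boolean add(E e) { addCount++; return inner.add(e); }
    boolean addAll(Collection<? extends E> c) {
        boolean changed = false;
        for (E e : c) changed |= add(e);
        return changed;
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        InstrumentedHashSet<String> byInheritance = new InstrumentedHashSet<>();
        byInheritance.addAll(List.of("a", "b", "c"));

        InstrumentedSet<String> byComposition = new InstrumentedSet<>();
        byComposition.addAll(List.of("a", "b", "c"));

        // inheritance counts 6, composition counts 3
        System.out.println(byInheritance.addCount + " vs " + byComposition.addCount);
    }
}
```

The inheritance version silently depends on an implementation detail of its parent class, which is exactly the kind of coupling that makes systems rigid over time.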
Never use Play, imo, especially if your focus is Java (the language). It doesn't offer a modern workflow or Java idioms at all, not to mention its dependence on Scala and sbt, which is its own special hell.
Reactive is probably a mistake with Loom looming around the corner.
Spring Boot has historically been resource-hungry, but that's changing quite quickly.
Modulo usual caveats about not promising anything and safe harbours and forward-looking statements, VMware is interested in Spring efficiency up to a very high level in the org chart. Watch this space.
This is almost exactly my experience. IntelliJ is great, I would also add VSCode, MSFT has always made great dev tools. Spring is the standard everywhere basically. I almost never see features past Java 8-ish being used, so your 12 years ago remark is right on.
What do you mean by this?
Startup times, or developer onboarding?
Because I've had this conversation before, if it's the latter. To me, Spring seems like the last gasp of "Enterprise" Java. Too much is implicit and obscure (aspect-oriented programming is an anti-pattern, IMHO), and too much is configured (yuck, XML).
Other than Effective Java, I recommend looking at some of the Google libraries, specifically Guava [1] and Guice [2] for dependency injection.
Java is fundamentally a slow adopter of new techniques (it just got lambdas in JDK 8), but a lot of the Google libraries fill in the gaps.
Note that if you are learning Java for Android development, that's a whole different sub-discipline. In that case I recommend the Android tutorials since most of the work is dealing with the Android SDK.
Point taken. For some reason, a lot of the code I've encountered is tied to JDK 8. Might be due to it having LTS until 2018 [1].
It could be anecdotal, but I've found in practice vendors and companies are conservative about their JDK upgrades. I haven't seen anything prior to JDK 6 in a while, but I don't think the upgrade cycle is as fast as say, python minor version upgrades.
I would still recommend Guava for immutable collections, and for the caching classes, both of which are much better than trying to piece things together on your own.
There are a lot of features that have been subsumed into the JDK, and you should usually prefer the JDK implementation where available. Guava has deprecated the redundant functionality, so if you pay attention to your IDE you will be fine.
Guava's immutable collections are superior to Collections.unmodifiable*, which just wraps the collection in a delegating class that throws an exception if any of the modifying methods are called. Guava's classes are their own implementations; this is particularly notable for Guava's ImmutableSet, which has significantly better memory usage than HashSet. That is partially owing to how lazily Java's HashSet is defined: it is a wrapper around Java's HashMap, meaning that for each element in a HashSet you get an unnecessary Map.Entry wrapper, along with its references. The difference is quite noticeable if you have a set with ~400k Integers. At a certain point you should move on to something like Trove, but the Guava immutable classes are nice in that you still get the collections interfaces.
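The wrapper-versus-real-implementation distinction can be demonstrated with the JDK alone, using Set.of (the JDK 9+ analogue of Guava's ImmutableSet):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ImmutabilityDemo {
    public static String run() {
        Set<Integer> backing = new HashSet<>(Set.of(1, 2, 3));
        // Collections.unmodifiableSet is only a view: mutations to the
        // backing set still leak through it.
        Set<Integer> view = Collections.unmodifiableSet(backing);
        backing.add(4);
        boolean leaked = view.contains(4);

        // Set.of (like Guava's ImmutableSet) is a genuinely immutable
        // implementation: its contents are fixed at construction.
        Set<Integer> frozen = Set.of(1, 2, 3);
        boolean threw = false;
        try {
            frozen.add(4);
        } catch (UnsupportedOperationException e) {
            threw = true;
        }
        return leaked + "," + threw; // "true,true"
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Note this sketch only shows the aliasing difference; the memory-layout advantage described above is specific to implementations like Guava's.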
Knowing what Guava offers is useful, but I try to avoid it, especially in libraries, because it has a history of breaking changes. The Apache Commons libraries are versioned better and a little more focused.
Protobuf (another Google Java product) also made big breaking changes in libraries between 2 and 3.
Broadly I agree, but JDK 8 was released in 2014, so while Java is certainly conservative you're overplaying it there.
The relatively recent adoption of the 6 monthly release cadence is helping a lot - particularly with the small feature additions that used to get stuck behind the release train.
As someone who recently did that after ~8 years, I found it useful to skim through Modern Java in Action. It should be a pretty easy read if you've been doing more modern languages, but it's important for learning Java semantics and nuances. I also highly recommend Java Concurrency in Practice, though it's a tougher read.
That said, for most orgs, the biggest changes you might see would be in libraries and frameworks used. There will most likely be less XML, better build tools and more modular library usage than 12 years ago.
I was in a similar position a couple of years ago. I recommend the book "Core Java for the Impatient" by Cay Horstmann. The printed version is for Java 8 but there is a preview available for Java 11. It's a concise book but it covers all the important bits.
Regarding the "bite the bullet": I was also a bit afraid, but Java is a great language. Yes, it's a bit verbose, but that's compensated for by its amazing tooling, especially IntelliJ IDEA.
There's nothing that makes a programming language good or bad; they all can do the job. It's a matter of personal preference: Java might be "objectively worse" for you, but it might be the best language possible for thousands of other developers.
You'll find many good resources on https://www.baeldung.com/; it regularly comes up in search results, and its pages appear maintained and written for modern versions of libraries.
Now that I have read all the responses in this thread (thank you!) and done some more digging of my own, here are a few more insights I have picked up as a Java-outsider / newbie:
- The current release of Java Standard Edition (SE) is 15, but many applications are still on version 8 or something in between.
- The enterprise version (EE) of Java is now called Jakarta EE and is part of the Eclipse Foundation [0], with a focus on cloud native (Kubernetes, Docker, ...).
- Naming schemes: Java 1.8 is just Java 8; from Java 9 onward, the "1.x" prefix was dropped entirely.
- The release cadence of Java changed with version 10 (in 2018) from "every few years" to "twice a year"; that's how we got to version 15 in such a short period of time.
- The officially recommend way to build Android apps today is in Kotlin, but there is still support for Java.
- The JDK is used for developing apps in Java; the JRE just runs those apps. The two seem to be converging: everybody just downloads the JDK. The JDK contains a debugger, a shell, a documentation generator, a compiler, etc.
- OpenJDK, the project containing the Java source, is available in two builds, OpenJDK and Oracle JDK (the difference seems to be in commercial support from Oracle, not in code/functionality). Oracle is the primary contributor to OpenJDK [1][2]
Not really. That’s the point of Java. It’s stable.
Check the Wikipedia page on Java history to see the handful of new features then just look for tutorials on those that you need to use. Most of these features are things you can pick up as you go.
1) It finally got anonymous functions (lambdas) in Java 8, so you no longer have to use anonymous classes and complicated design patterns as substitutes.
2) The distribution model changed from targeting a preinstalled JRE to bundling your own runtime with jlink and jpackage (i.e. the same as native applications and .NET Core).
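Point 1 in a nutshell; a small sketch contrasting the pre-Java-8 anonymous class with its lambda/method-reference equivalent:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LambdaDemo {
    public static List<String> run() {
        List<String> words = new ArrayList<>(List.of("pear", "fig", "apple"));

        // Pre-Java-8: an anonymous class implementing Comparator
        words.sort(new Comparator<String>() {
            @Override public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8+: the same sort as a one-liner using a method reference
        words.sort(Comparator.comparingInt(String::length));
        return words; // [fig, pear, apple]
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```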
Right, these are both one-day study exercises I think to come up to speed. Java's designed like this - almost everything is additive and builds on the same principles.
I'm surprised that GraalVM is not mentioned. I have strong suspicions that Oracle is lining up Graal to become the next default JVM, after watching some talks from core developers describing how fragile the Hotspot codebase has become. Graal would be a good way to encourage ML, given it has language support for Python. Perhaps Graal is just not ready for Oracle to show all of their cards at this point. Or perhaps my theory is incorrect.
GraalVM is HotSpot. The SubstrateVM used in native-image has a lot of limitations and performance that can match regular HotSpot with C2 is something you have to pay for.
HotSpot is a pretty great codebase actually. It's very easy to read compared to the CLR. The issue with it isn't that it's bad code or fragile, it's just that it's very complex, but they're reducing complexity over time by removing obsolete optimisations (obsolete, or so they argue).
The SVM codebase is also nice but it's a very different model. Over time the codebases may be merging, as Graal the compiler gradually replaces C2. But that could easily take a decade.
They still need some more time with GraalVM. Currently you need Microsoft's C++ Compiler in order to use GraalVM on Windows. They will need time to polish these things... But GraalVM performance is already on par with HotSpot - with better startup times!
It is an issue for all platforms - I just mentioned Windows, because I work on Windows. And the issue is: Why do I need a C++ compiler if I would like to develop in Java?
The biggest mistake Google made was screwing Sun over instead of buying it when they had the opportunity to do so, most likely hoping it would sink without a trace.
While my career has mostly been as an AI practitioner, Java was also very good for my career. Sun linked for a year from their Java home page to a blog article I wrote on the Java world tour, so for about 10 years I was the first search hit for "Java consultant," which was nice enough.
Except for periodically updating my Java AI book [1] (5th edition was released July 2020), I don’t much use Java because most of my customers want to use a Lisp language.
Where should Java go now? I think both OpenJDK and also Oracle are doing a good job adding new features. I would vote for faster startup time; keep improving language conciseness; better data initialization literals.
Are the customers demanding work in Lisp creating new systems, or maintaining mature systems? I had assumed that the use of Lisp for symbolic reasoning and AI had largely disappeared. Is it making a resurgence?
Common Lisp is used for quantum computing research, writing educational software, web programming, semantic web, etc. It is a very general purpose language. I have written a couple of Lisp books which is probably why I get Lisp work.
I started learning Java in 1996 and it was a real revelation back then. Coming from very platform-specific C, everything felt comparatively easy. And Javadocs were amazing.
Just a few months ago I dusted off an old project from 1997, loaded it up in IntelliJ IDEA, built it, and ran it. It worked! And that's Java's best feature, it's long-term language and library stability. I worry that it is at risk now with Oracle's new 6-month release cycle.
I was at Adobe 25 years ago. And there were people running around saying "We need to re-write everything in Java! This way we can write it once, and it will run on Sun, Mac, Windows, SGI, everywhere!"
Funny how that never worked out. Even the few Java desktop apps that don't look like 30-year-old SunOS apps (IntelliJ is probably the best-looking Java app) have to have substantially different versions for each platform.
A few people in the research groups tried re-writing a few apps in Java, like Acrobat Viewer, but nothing ever came of it.
I'm not sure IntelliJ is substantially different on each platform. It's basically the same app but bundled with a JVM. IntelliJ shows that Swing, even though it's old, is still a perfectly serviceable toolkit that can make competitively attractive apps in a cross platform way.
My comment is sexist, but culturally relevant from where I'm from.
Java is like your old wife/husband/spouse. It's not sexy, you probably don't enjoy much when doing things with it. But it's dependable and reliable, And, aren't you where you are now thanks to it?
Newer languages.. yeah they're sexier, more fun to play with, make people think you're cool when with them, but they might end up wasting your time :)
There's a lot of enterprise data science being done with Java/Scala and the Hadoop/Spark ecosystem.
MXNet also makes deep learning a first-class citizen in Java, but yes the research community is firmly entrenched in Python atm.
I see JDK languages as being in a decent spot for deploying ML/DL, and maybe an "emerging" language for training DL models.
Folks like Jeremy Howard have increasingly been expressing growing pains with Python and TensorFlow has been looking for something with a better type system like Swift.
If I had to take a guess, I'd say that Python is likely going to be sharing the ecosystem with a language with a better type system and performance like Julia or something on the JDK.
The Tensorflow Java project is alive and well. The next version of the API based on TF 2 is coming out soon. (Full disclosure, I'm a member of the SIG that's building it). I'm a firm believer in the utility of type systems for building machine learning projects.
I too would like to know what's going on in their minds, given they're apparently under the impression that the most notable contender in that space is Go. Scala isn't a bad call though, and is on the JVM. Java itself seems like a non-starter for doing actual data science due to its verbosity and lack of scripting.
Instead, Julia should really be first to mind when considering that question, although I might expect the author to respond with something about the mysteries of 'production'.
A lot of that territory is well worn by various Java libraries. Not a lot of money to be made so Oracle is probably better off not bothering with anything ML or data science related, nothing here to see that would interest them in any way, nope.
Seems a bit of an overstatement - there are huge production systems doing massive ML using Spark, Hadoop, Kafka etc. In some areas these are the defacto solutions. They can't all be "not in their right mind".
I will risk getting downvoted to hell for this. The data scientists in my company are pure geniuses. But. They can barely program. I wouldn't trust them with Java; I'm sure they wouldn't trust themselves either. Python is slow, but at least it's easy and fun. I don't see anything else getting traction for years to come.
I've been programming Java since 1998 or so, and this may be surprising to some on here - but I still like it!
Here are the upsides in my mind, and as with all these things keep in mind - nothing's perfect! Everything has tradeoffs and I am making no absolute statements. Anyway, in no particular order:
- Performance. It's possible for Java to be within 50-98% of the speed of even the most hand tuned native code, which is pretty remarkable considering the language comforts it provides.
- Backwards compatibility. I have code that's 15 years old I still use. Programming hasn't fundamentally changed _that much_ in the past 20 years, but a lot of other languages act like it's changing by the year or even month. I deeply value not spending time fixing breaking language changes or dependency problems (for the most part).
- OOP. I know we're in the cycle now where "OOP is a failure," but that's just wrong. Just like it was wrong when "OOP was the solution to everything." I don't go crazy with it, rarely use inheritance, use interfaces only occasionally, but do find it's nice to keep myself organized in terms of scope of code and responsibilities.
- Typing. I know it's not perfect, but I really lean on the type system in Java. I think this might be because I'm a bit of a messy programmer to be honest, so this helps me keep a handle on things. When I work in Javascript I find things getting out of control quickly, whereas I can make statements about the entire codebase in Java that help me stay on track.
- Tools. I've been using Idea as an IDE for 15 years or so and it feels like an extension of my mind. I can't even remember what the keyboard shortcuts are anymore, I just do them. It really helps keep the focus on the problem at hand rather than the details of implementation. We're finally coming out of the "IDEs are bad" cycle so I see more people using them again, that's good for everyone.
- Simplicity. One major thing that people have forgotten with Java is that you can write simple code with it. It feels like the number one complaint is over engineered "AbstractFactoryImpl" codebases, and I hate those too. But it's also possible to write simple, easy to follow, understandable code.
- Exceptions. I used to hate these too, then I had to write some code that _really had to work and handle everything_ and suddenly they were pretty nice.
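A tiny sketch of the exceptions point (readConfig is a hypothetical helper): a checked exception means the compiler forces the caller to write the failure path, which is exactly what you want in code that really has to work:

```java
import java.io.IOException;

public class ExceptionsDemo {
    // Declaring IOException makes every caller confront the failure case.
    static String readConfig(boolean available) throws IOException {
        if (!available) throw new IOException("config not found");
        return "ok";
    }

    public static String run() {
        try {
            return readConfig(false);
        } catch (IOException e) {
            // The compiler made us write this branch; fall back instead
            // of crashing, or of silently swallowing an unchecked error.
            return "default";
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // default
    }
}
```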
In terms of downsides:
- It's pretty bad for scripting. A lot of that could be fixed with extensions to the standard library that provide easy helper methods; we shouldn't have to construct URL objects just to fetch something. I'm very jealous when I see a nice Python script for doing something with data from an API.
- It does tend to want a lot of memory. This isn't as much of a problem anymore thanks to Moore's law, etc.
- Library support for new services. People tend to write a Javascript, Ruby and Python library for everything, which is super nice. Java is not as common these days.
- Some other stuff I'm probably forgetting right now, I'll update this if I think of anything else. As I said, nothing's perfect.
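On the scripting complaint above: Java 11's java.net.http.HttpClient has narrowed the gap somewhat. A minimal sketch (example.com is a placeholder endpoint; the request is only built here, not sent):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class HttpSketch {
    // Building an HTTP request no longer requires URL/URLConnection plumbing.
    public static HttpRequest build() {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/data"))
                .timeout(Duration.ofSeconds(10))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Actually sending it is one more line:
        //   HttpClient.newHttpClient()
        //       .send(build(), HttpResponse.BodyHandlers.ofString());
        System.out.println(build().uri());
    }
}
```

Combined with single-file source launching (`java Script.java`, JDK 11+), this gets Java closer to the quick API-poking scripts the comment envies.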
And in a general note: Don't underestimate the power of true expertise in a language. I find that in a lot of the cases when I'm reading about something new, it's hard for any upside to overcome deep experience. In that way, the best language for anyone is probably the one they know best, so maybe I'm biased.
Everything Java does, C# does better. Java shouldn’t still struggle with generic arrays...but it does. Java should easily have first-class pointer support by now. C# does but Java doesn’t. Checked exceptions should have long been put to bed by now...but they haven’t.
The only reason to use Java atm is interop with another JVM language (e.g. Clojure) or to not get locked into the MS ecosystem. C# is otherwise just streets ahead at this point.
I prefer the JVM over .NET primarily for one reason: Microsoft's pervasive telemetry, i.e. exfiltration of my private data, which is now part of all of their tools and enabled by default. For each tool you have to research what the trick is to disable it (when at all possible), but even then the disabling mechanism is sometimes broken (e.g. for .NET Core) or supposedly local data is sometimes accidentally sent to Microsoft anyway (e.g. for the new terminal).
Also, although C# is pretty nice, Kotlin is nicer.
Unrelated to the real topic at hand: I find it telling that an antiquated software company chooses to publish stuff as PDF. I usually refuse to open such links (my personal choice, of course) because I can't see any valid reason to use it in this context. PDF makes sense for printed forms, but for anything else it makes no sense to me, yet it's far more widely used than I could have imagined.
PDFs are a great file format for digitally viewing print-style publications. This document appears to be in the style of a print publication, not a web page.
Here's an example of two far less antiquated technology companies using a PDF for something that could also have been a web page, but was found to make more sense as a print-style publication: https://blog.google/documents/73/Exposure_Notification_-_FAQ...
Of course it is for printing, yet we are viewing it on the web. Is anybody actually printing any of those announcements? Or is the thought that a "print look" confers more weight on a document in the eyes of some consumers?
The guy who wrote the most lines of JVM code ever (Prof. Odersky), who wrote the compiler that became the first javac, who added generics in Java 5, himself moved away from Java to create Scala. Once you use something like Scala or even Kotlin, you never want to go back; you realize the Java world is just playing catch-up. They are now like a cola company advertising diet cola as healthy. The functional part of Java is not really functional (it merely allows immutability). They are just trying to survive, adding all kinds of features to Java to play catch-up. I heard they are planning to add pattern matching in the latest version.
The guys who make excuses like "Kotlin or Scala is hard" can be compared to an older car mechanic saying Tesla is BS because his knowledge will become obsolete if it takes off. If you really think a language is hard, you are not supposed to be in the programming business. Also, these same guys just use Java to program the web. Heck, Java is fast, but the cost of lost developer productivity waiting for compilation can't be justified by the saved 100 milliseconds of processing time. What matters on the internet is perceived speed and page load time, and there are so many other ways to achieve that, like simply upgrading to HTTP/2, or preloading HTML and JS and then "hydrating" the page. The entire source code of the forum dev.to is open source, and you can visit the website to see how fast it is; it's not written in Java. This is just an example; Facebook wasn't written in or hosted on "enterprise" Java either. While Java is fast and is suited for low-latency data processing (like trading systems or big data), I hate that even with all the love it receives, 98% of the so-called Java guys just use it for web programming. Also, Scala beats Java at the thing Java does best, fast processing. Ever heard of big data processing with Spark?
All of our devs use Typescript and Java daily for front and backend, the only overhead is making sure objects were passing around match on both ends. The only advantage to using the same language for everything is hiring inexperienced devs that don't know both IMO
> the only overhead is making sure objects were passing around match on both ends
Seems like it is a big deal, based on the first sentence.
I've never been convinced by the single-language argument. Sharing code between frontend and backend sounds good, but in practice there's little overlap: models have subtle differences, there's extra logic server side... All in all it's not very practical.
To me the most awesome part of a fully TS project is that you can use the same interfaces everywhere. If you take the time to define them for any input / output, everything is pretty much guaranteed to be sound.
> the type erasure problem is far worse than Java
Check out RunTypes [1], amazing to guard any incoming data.
1. https://github.com/pelotom/runtypes
There is a big difference between "this API can blow up" and "it can blow up everywhere".
Don’t Node.js worker threads solve exactly this?
Which means you are second guessing the compiler, which provided you with such great guarantees.
You should do your best to minimize this kind of code.
It completely succeeded in that!
Java (well, JVM) developers today can:
- Write code on any of Windows/Linux/macOS
- Deploy that code on any of Windows/Linux/macOS
Not a lot of languages/platforms can claim this amount of success, let alone with such an amazing set of tools and such an ecosystem.
In practice, how many developers write C# on a non-Windows platform? I'd say a very, very tiny minority.
On the other hand, Java is being written on all platforms and being deployed on many as well.
Plenty do - think of deploying c# web services on Linux servers / containers.
I write C# on Linux and I am basically the only person I know who does that.
Whether that effort pays off, only time will tell.
You can also compile JavaFX apps AOT for iOS! It's called Gluon Substrate, check it out.
Java really does run on a lot of stuff, even if OpenJDK itself may not.
That's what people say, but I don't see that.
The same Java code is extremely portable, from Windows, MacOS, Linux, etc, on both the server, cli, and GUI app side.
It's just that its UI libs have historically been over-engineered shit like Swing.
But much of the original late-90s hype about Java was its cross-platform nature, especially in the browser; yet Java applets and other in-browser Java technologies were never popular (consider that GMail and other GWT apps were written in Java but compiled down to Javascript).
Java owes 90%+ of its ubiquity and longevity to its success on the server.
Applets were very much a victim of various power struggles within the browser industry, combined with Sun's general lack of competence on the desktop - for instance, their online upgrade engines have always sucked. Though in fairness, nobody got that right until Chrome.
I wonder if it was because, at the time, the different DOM APIs in each browser were so immature that unifying them into a single API was too big a task. The applet-plus-blackbox region for rendering was the easiest MVP. Of course, with hindsight, that turned out to be a piece of crap.
That crown properly belongs to the UCSD P-System, which was the Java of the 1980's. It was the same idea as Java - compilation to a bytecode which an interpreter ran. It failed because the interpreter performance penalty was too high.
Java also started out as an interpreter, which made it too slow. Steve Russell of Symantec invented a JIT for it, and like the lumbering Allison-engined P-51 getting a supercharged Merlin, it brought Java to life.
You could run p-system on a lot of machines - Apple II, IBM PC, TI-99/4A, PDP11... but how would you (and why would you) distribute your code across machines with such different storage media?
I transferred files from my PDP-11 (8" floppies) to my PC (5.25" floppies) using Kermit.
I coded in UCSD-P quite a bit (and played a few games written in it, Wizardry on the Apple 2 anyone?).
But UCSD-Pascal never reached a tiny fraction of the audience that Turbo Pascal did.
Are you talking about the host OS and the fact that Turbo Pascal was Windows only, as opposed to Pascal UCSD which was a VM?
I write a lot of JavaScript (TypeScript included) and have found TypeScript to be a very nice combination of things I like from both Java and JavaScript.
Admittedly I have been primarily writing Java for the last 15 years and consider JavaScript a second language for me, you may find Kotlin gives you a good reason to come back to the JVM (where it makes sense to of course).
I do it daily during most of those 25 years, developing on Windows, deploying across multiple flavours of UNIX.
In my experience, the engineers that can do that and still produce solid code are very, very rare.
In retrospect, as an early adopter of languages like Scala/Groovy, I really like how Java just waited and watched for a few years to see what was good in those languages and let them make mistakes on the way to building something stable and then adopted a lot of things that made those languages fun.
Java since 11.x onwards has been a great mix of developer productivity, a stable core (other than people writing trivial projects, most people want something that lasts for years without random bugs), portability, and great tooling (especially from the IntelliJ side, as well as on the debugging side).
I'd much more openly recommend Java as a loved language now than back in 2010 (though Elixir is the new and shiny project I'm playing around with right now ;)).
But I was wrong - the most popular language by far for the programming test was Python.
For my own products I've never used anything but native (well except browser programming which was Javascript).
https://en.wikipedia.org/wiki/Functional_programming
In high-traffic environments, that ignorance punishes you. I've always felt Java and the JVM are of the mindset that you need a Ph.D. to even understand how it works or how to configure it, and if you can't get it, then you're just a bad programmer.
You need to know if you're blocking threads, if there's memory contention, and if libraries you pull in are using the forkjoin common pool (which you're likely using as a default threadpool). And when something blows up, finding the reason (even for any of the above issues) is really tough. You can use flight recorder, heap dumps and gc logs all day, but good luck navigating it all unless you're a genius. I've seen too many devs end up shrugging and hoping the issues are transient.
Even figuring out proper threadpool usage isn't straightforward. Look at the number of concurrency abstractions just to model concurrency in your system: https://www.youtube.com/watch?v=yhguOt863nw. It's ridiculous.
Lots of large tech companies "seem" to "make it work." But if their experience is anything like mine, they're just relying on a handful of Ph.D.'s to hold the hands of the rest of the company when it comes to troubleshooting.
Part of the reason I fell in love with Elixir/Erlang and the BEAM is that it provides a simple (actor) concurrency model (with a single concurrency primitive, a process) and guardrails (time-slice scheduling) to prevent libraries from shooting you in the foot. OTP's observer makes finding bottlenecks a breeze.
For the web, taming concurrency feels way more important than any cpu-crunching perf gains the JVM can give you. I'm too stupid for the JVM; I'll stick to tools that take away numerous categories of complexity and get me closer to mastering my system.
Huh? Java is used in huge traffic backends, including HFT with minimal latencies acceptable all the way to Google and Twitter scale.
If anything, Java is much faster and lower level than the typical languages used for huge high-traffic services (Rails, Python, etc.), never mind what's used in low-traffic situations.
And, to that, I'd agree. I've built high-traffic stuff in Java. We built it, load tested it, and it was terrible. After multiple rounds of profiling, tweaking GC settings, tweaking threadpool sizes, rewriting things to be async, and finding out that a client library wasn't reusing connections properly, we finally had acceptable performance... which was still less than I'd gotten out of the box from similar, IO-bound services written in unoptimized Erlang.
I've seen the same troubles with alternatives, just without the amazing tools, featuresome standard library or widely-accepted conventions.
Erlang is amazing and places concurrency in a more central position. I'm hopeful Project Loom will greatly diminish the gap while carrying legacy code forward unchanged.
Part of the reason for its success has been its strong commitment to backward compatibility, so it's to be expected that it might accumulate many ways of doing things. Python wisdom tells us this is often a Bad Thing. [0]
I imagine Java's approach to concurrency and parallelism might be quite different if it were designed today.
[0] https://wiki.python.org/moin/TOOWTDI
Probably not, actually. Project Loom's initial goal was to rethink concurrency on the JVM from scratch. What they came up with was:
* Make threads really, really cheap
* Make thread locals work better (as scoped locals)
* Add a few Executor utilities to help you control sub-tasks better (structured concurrency)
It turns out that Java concurrency is pretty damn good already. It provides all the different paradigms you might want to explore, is efficient and well specified. Meanwhile they realised that many of the alternative approaches to concurrency are in reality trying to work around the high cost of kernel threads. When you make threads really cheap, a lot of the motivation for other approaches falls away and the existing set of tools in the JDK come to the fore.
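A rough sketch of that point: the familiar executor/Future model stays exactly the same, and Loom just makes the threads behind it cheap. The class and numbers below are illustrative, not from the JDK docs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CheapThreads {
    // One task per unit of work -- the thread-per-task model Loom makes cheap.
    static int parallelSum(int n) throws Exception {
        // On a Loom-enabled JDK you would swap this single line for
        // Executors.newVirtualThreadPerTaskExecutor(); nothing else changes.
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int v = i;
                results.add(pool.submit(() -> v)); // each task is trivially small
            }
            int sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get();
            }
            return sum;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(100)); // 5050
    }
}
```

That's the sense in which "a lot of the motivation for other approaches falls away": the API surface you already know keeps working, only the cost model changes.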
There are, however, a few things in Java's early concurrency support that make various things harder, including Loom, and we're having to put some extra effort into grappling with them.
Probably the most obvious is the fact that the language and VM require every object to have a monitor lock that can be synchronized and waited/notified. In 1996 this was viewed as "Ooooh, sophisticated, building locking and concurrency support into the platform!" In recent years this has started to get in the way. Really only a very few objects are used as locks, but the _potential_ for every object to be locked is paid for by the JVM.
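To make that concrete: any plain Object can serve as a lock, which is exactly the capability every object pays for. A small illustrative sketch (class and method names are mine, not from any spec):

```java
public class MonitorDemo {
    static int counter = 0;

    // Any object has a monitor, so a bare Object works as a lock.
    static int incrementFromTwoThreads(int perThread) throws InterruptedException {
        counter = 0;
        final Object lock = new Object();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                synchronized (lock) { // acquire the object's built-in monitor
                    counter++;
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(incrementFromTwoThreads(100_000)); // 200000
    }
}
```

It's precisely because even a throwaway `new Object()` carries a monitor that identity-free inline types (which have nowhere to keep one) sit awkwardly under java.lang.Object.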
It also intrudes on Project Valhalla, which is trying to define "identity-less" inline types (formerly, "value types"). Ideally, we'd want all conventional objects and inline objects to be descendants of java.lang.Object. But Object has the locking APIs defined on it, and locking is intertwined with object identity. So, does Object have identity or not? There are some solutions, but they're kind of weird and special-cased.
Another issue is that the locks defined by the language/VM ("synchronized") are implemented differently from locks implemented by the library (in java.util.concurrent.locks). Loom supports virtual threads taking library-based locks, in that when a virtual thread blocks on a lock it will be dismounted from the real thread. This can't be done with language/VM locks, so there's an effort underway to migrate those locks to delegate to library code for their implementation. This isn't an insurmountable problem, but it's yet more work to be done, and it's a consequence of some of the original designs of Java 1.0's concurrency model.
When I think about Java concurrency today I tend to think of java.util.concurrent or the JMM. Perhaps that's odd.
As for compatibility: why is Java 8 market share so high in 2020?
I believe Java 8 is still so popular because there were a lot of backward-compatibility problems with Java 9, compounded by the fact that Java 11 (the next LTS after Java 8; both Java 9 and Java 10 had a very short life) removed many APIs deprecated by Java 9.
https://developer.android.com/studio/write/java8-support
All libraries that matter on the Java eco-system are already on Java 11.
Worse, Kotlin fanboys don't get that without access to modern Java their Java FFI is worthless, as all Java 8+ libraries on Maven Central will slowly become useless on Android no matter what.
Additionally, the language cannot expose JVM capabilities unless it adds yet another backend.
So it will be stuff like value types, JNI replacement, proper generics, customized JIT and SIMD on the JVM, and plain old Java 8 on ART.
First by adding their own features to the JDK, and today simply by making Kotlin the main language to program in on Android.
Android is completely unshackled from Java today.
That is precisely my point.
Android has completely unshackled itself from Java development. Between its reliance on OpenJDK and Kotlin, it literally has zero dependencies on Java.
If the Android team plans to rewrite all of them in Kotlin, be my guest.
Maybe they will manage before Fuchsia goes live and Flutter wipes the floor, and then everyone will be doing Dart anyway.
Have you noticed how shitty all the languages designed at Google are?
Thankfully someone that was there since Java 1.0 days bought its rights.
GraalVM would have been killed at birth.
I am also looking forward to the complete Android development environment running on top of Kotlin/Native; otherwise it will be so funny having to port Studio and everything else that depends on the JVM to modern versions, while Android itself is frozen into a Kotlin ecosystem + Java 8 subset.
What are you talking about?
Android developers can use Maven Central like any other Java developers, without caring about what JDK those dependencies were compiled with or even whether they were written in Kotlin (most were not, obviously).
> I am also looking forward to the complete Android development environment to be running on top of Kotlin/Native, otherwise it will be so funny having to port Studio and everything else that depends on the JVM to modern versions, while Android itself is frozen into a Kotlin ecosystem + Java 8 subset.
Again, what are you talking about? Android development happily upgrades to the latest version of Kotlin without any trouble. Porting Studio? What? Do you even understand anything about any of these matters?
My point is simply that Android development today has zero dependencies on Java, but you seem to have a thick chip on your shoulder and seem determined to spew toxic bile at Java and its ecosystem, while feeling some vague hate at Google in general.
I have zero interest in this debate, have fun tilting at these windmills.
Stating otherwise just proves that you don't know Java.
Android Studio and the complete Android toolchain run on top of a JVM implementation. As the JVM moves forward, JetBrains will be forced to update IntelliJ to take advantage of newer JVM versions, which will force Google to update their entire Android development environment.
Just for kicks, they are already being forced to do this:
https://github.com/robolectric/robolectric/issues/5258#issue...
https://issuetracker.google.com/issues/139013660
Again, another proof of total lack of knowledge regarding Android
Toxic bile at Java?!?
Quite the contrary, I have loved Java since 1996; it is my third pillar alongside .NET and C++. What I completely hate is that Google played a Microsoft move with their flavor of Android Java (aka Google's J++), helped drive Sun into bankruptcy by withering their revenue stream from Java deployments on Android, didn't bother to rescue Sun in the hope it would close its doors without a hiss, now uses Android Java to force Java developers to create special versions of their libraries tailored to Android, and has a bunch of Silicon Valley fanboys supporting actions that damage the Java ecosystem.
What were again your apps on Play Store?
That's not really fair. The point of the Erlang language was its novel and opinionated approach to concurrency. Java wasn't trying to be like Erlang, it was trying to lure programmers by having significant similarities to C/C++.
But when it comes to remote debugging, and more specifically, a general "I want to understand what is happening in production", the ability to attach a REPL, alongside your tools, is amazing. I can insert a breakpoint, sure (if I for some reason built my production instance with debug info), but just as easily (without any debug info compiled in!), and more usefully, I can query actor state, mailboxes, etc, fire a message to a process to see what happens, etc...all the things you'd get with a REPL running locally in your dev environment, basically. Do stuff like query for internal state for a process, then call a function with it to see what happens to the data, all in isolation from the normal execution flow (since immutable data gives you a degree of safety to actually run that live code, with copies of the live data, and see what happens). I can even remotely load new code, if I want, effectively allowing me to deploy a hotfix without taking the node down. And I can do all of this in prod. All of this is, of course, super dangerous, but with great power etc etc.
If you write the service stateless it's incredible what you can achieve with a couple small instances of a default spring boot container.
Can you point to another language that has anything remotely comparable to `java.util.concurrent`? Also, Java is getting green threads by means of Project Loom.
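For reference, a small sketch of the kind of thing `java.util.concurrent` gives you out of the box -- here `ConcurrentHashMap.merge` doing atomic per-key updates from several threads; the word-count task itself is just an illustrative example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class WordCount {
    // Count words across several threads with no explicit locking:
    // ConcurrentHashMap.merge performs the read-modify-write atomically.
    static Map<String, Integer> count(final String[] words, final int threads)
            throws InterruptedException {
        final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        final CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(() -> {
                // Each thread walks a strided slice of the input.
                for (int i = id; i < words.length; i += threads) {
                    counts.merge(words[i], 1, Integer::sum);
                }
                done.countDown();
            }).start();
        }
        done.await();
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> c = count(new String[]{"a", "b", "a", "a"}, 2);
        System.out.println(c.get("a")); // 3
    }
}
```

Concurrent maps, latches, executors, futures, and lock-free queues all ship in the standard library; few other standard libraries bundle an equivalent set.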
Not sure how to interpret this comment.. If high concurrency and high performance matter, that is precisely where Java shines. The only other reasonable option would be C++ but it brings so much pain with it that Java is the way to go.
If traffic is low and performance doesn't matter (which is most sites), then sure, use whatever favorite scripting language.
The same cannot be said for Python and JavaScript, for example. At least not by default.
Or am I misinterpreting the argument?
To be fair, docker is already a pain on my machine (using Fedora 32). I gave up on using docker at some point.
What's next for Java should be relatively little change; let a language like Kotlin without all the baggage be the way ahead on the JVM. There's a remarkably good compatibility story there; way better than basically any other language ecosystem out there, that's the real legacy of Java.
On the other hand, they need to have everything. In many other languages, it's common to just use a library written in a different language. For some reason, the foreign function interface of Java seems to have been designed to be hard to use, so instead of using an already existing library, Java developers tend to go through the route of "Rewrite It In Java".
Java is big because it has been around a long time and was decent when it came out. Java was the Go of its generation. Nothing radically new but wrapped up in a way people liked and was familiar with in large part due to the success of C/C++ prior.
Whatever achieves mass adoption after Java will also be behind the times by the time that happens, and as geeks we will have moved on to whatever is newer and cooler.
I am pretty neutral towards Java as a language. My biggest issue is with the software culture of over-engineering and complicating things. Java guys seems very dogmatic about how to design software.
Very early in Minecraft's development, people were already decompiling/modifying/injecting their own mods, and a lot of frameworks (Bukkit, Spigot, etc.) emerged to provide a common API for modding.
The large modding community arguably had a very positive impact on Minecraft's early success -- Although I don't have any quantitative metrics to reinforce that point, I fondly remember early Minecraft as having a relatively technical community that tinkered with the game as a sandbox for countless custom experiences.
https://www.youtube.com/watch?v=v1wrWQcqLpo&ab_channel=Java
Sadly, in the real world, it seems that Scala is mostly relegated to the Spark world.
An excerpt: "Nothing competes with Java. Nothing. Because Java wasn't about destroying the competition; Java was about creating a reality that otherwise did not and could not exist. It was about imagining the 'what could have been', and then creating that."
And somehow it is still widely used within Google, Facebook, Amazon, Twitter...
But for a service which needs to handle 500 concurrent requests at maximum and doesn't have to deal with TLS anymore it will be fine. And there's enough of those services out there.
A lot of the Java code in bigger companies is also written against older frameworks like earlier versions of Servlet and J2EE. Those programs will also not make any use of async mechanisms, preferring a simple programming model instead.
https://github.com/rodrigovedovato/jetty-loom/blob/master/sr...
https://openjdk.java.net/projects/loom
Threads (and pointers, which you compared them to) are the abstraction at the hardware level - everything else has to be built on top of them in one way or another. Just because you have access to threads (or pointers) doesn't mean you have to make poor architectural decisions. I'd like to draw your attention to Doom Eternal which takes the thread pool model through to its logical conclusion. (https://twitter.com/axelgneiting/status/1241487918046347264) I hope you'll agree that's an example of meeting the needs of real software. (I'm sure it's not the first or only example of that approach, it was just on my mind because it came up recently.)
Most of the highest-performance server code across many industries is written in Java. So...
I wonder what you consider could possibly compete with Java in this space?
This is a different GC design than V8's and Go's, which use older collector designs with high overhead. They need to collect very frequently because their stop-the-world pauses get longer with more garbage. Java's new collectors are near constant time, even with terabytes of garbage, so it's much more efficient to wait until the heap builds up and collect much less frequently. Ironically, Java appears to use tons of RAM because it has better garbage collectors.
When you configure Java GC to collect frequently, it turns out Java uses 2-3X less RAM than JS, and far less than Python, Ruby, etc. It does use more than Go, about 2X. But the point is it uses a lot less RAM than most other popular languages for web dev.
Unfortunately, "Java uses too much RAM" is used in defence of things like JS and interpreted languages, when in actuality Java uses much less if you configure it to.
The various factors I see are:
* JVM overhead - all these advanced optimizing compilers and GCs have non-zero footprint. They need RAM to run their code and they need RAM to perform their tasks.
* Compiled code cache - JVM keeps both the original bytecode and generated machine code in RAM.
* OOP overhead - each object has 2 or 3 words of overhead for the object header vs zero in languages like C or Rust. Even when you don't need dynamic dispatch or object locking, you pay for it.
* Inability to compose bigger structures other than by allocating separate objects and using pointers to reference them - these pointers need space and are not cheap on 64-bit architectures. This is going to probably partially improve with Valhalla, but at this point it is mostly guessing and it has been in development for years.
* No support for packed arrays.
* The smallest unit of loadable code is a class. If you needed a single function, the JVM loads a whole class containing it, and its required dependencies. This is not only bad for memory usage but also for startup time. Unless you pay a lot of attention, it is easy to load 80% of code in order to just display a help message (this is based on a real issue I worked on - I'm not making this up). Compare that to code in languages like C - the OS loads code that gets executed.
Some of this will get addressed in upcoming releases; it will be interesting to see how it goes.
And back to GC - I agree the default settings are often to blame, but there is a reason JVM defaults to using RAM so aggressively. GC becomes very inefficient when it doesn't have enough "room". And low pause GCs achieve their low pause goals by trading throughput. Switch from parallel STW GC to G1 and your maximum sustainable allocation rate goes down by a few times.
Java doesn't have a "struct". If you want to represent an array of 64-bit signed integers, Java has you covered with its primitive arrays. But if you want to represent an array of anything more interesting than that, (say, a tuple of a double and a long), you have to serialize and deserialize those objects to and from parallel primitive arrays or byte buffers. Because if you do the language-natural thing and use an Object array, you're paying a huge price in memory: 4 or 8 bytes per pointer in that array, plus a 16 byte Object header on each Object. And, of course, those Objects are all individual allocations, not necessarily contiguous. That's a lot of overhead!
Of course, Java programmers concerned with memory usage don't put up with this. Lots of solutions have been devised. OpenHFT's Chronicle Values[0] is one example I came across recently. But this feels like fighting with the language compared to how easy it is to be efficient with memory in C. If you told a beginner C programmer to make an array of compound objects, it's not unlikely that their array will take up exactly as much space in memory as it intuitively seems like it should. (8 byte double + 8 byte long) * 100 values = 1600 bytes in C, no fuss. If you asked the average Java programmer for that they'd give you something that would take up 3 times as much memory. And because Java makes that behavior natural, it "uses too much ram". It doesn't matter as much that it's possible to convince it not to.
0. https://github.com/OpenHFT/Chronicle-Values
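The parallel-primitive-array workaround described above looks something like this (the point-like type and the names here are hypothetical, purely for illustration):

```java
// Instead of an array of (double, long) objects -- each with an object
// header plus a pointer slot in the array -- store each field in its own
// primitive array: (8 + 8) * n bytes of data, plus two array headers.
public class PointColumns {
    final double[] xs;
    final long[] ys;

    PointColumns(int n) {
        xs = new double[n];
        ys = new long[n];
    }

    void set(int i, double x, long y) {
        xs[i] = x;
        ys[i] = y;
    }

    double x(int i) { return xs[i]; }
    long y(int i) { return ys[i]; }

    public static void main(String[] args) {
        PointColumns pts = new PointColumns(100);
        pts.set(0, 1.5, 42L);
        System.out.println(pts.x(0) + " " + pts.y(0)); // 1.5 42
    }
}
```

The price is ergonomics: you lose the natural object-array API, which is exactly the "fighting with the language" described above and what Valhalla's inline types aim to fix.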
Plus, many people overlook just writing a couple of native methods and being done with it.
Somehow this is how people write "Python".
Also Valhalla and Panama are around the corner.
I think this information is a bit dated. Go has a highly advanced concurrent collector with very low pause times (~1-100μs). V8 also has incremental marking, concurrent marking, and parallel compaction. Its pause times are more like 100-1000μs. V8's GC has been tuned more and more to save memory (i.e. smaller heaps) because people have so many tabs open these days.
Uh, most of the rant about Go GC below is out of date, they made some huge improvements from 2016-now. I'm leaving it up because someone replied to it
Unless something has changed in the last year or two, Go's GC is similar to Java's old CMS collector which is being deprecated.
Go's GC is non-generational and non-compacting, lacking both hallmarks of modern GC algorithms. It's not a modern "moving" collector. It also has several stop-the-world phases. It's basically a design from the 70's. The GC uses old, simple algorithms because the team was under a time crunch when it was developed. This may have changed, but that's how it was in the 2018-2019 timeframe.
The pause times are short because GC runs very frequently. It has to because performance with large amounts of garbage is quite bad. This results in significant GC CPU overhead.
I don't know much about the V8 collector besides that it is generational and compacting, so more modern than Go's. But it's still a "stop the world" design. In that regard it's still closest to Java's old CMS collector.
Java's new collectors, ZGC and Shenandoah, both have near-constant-time stop-the-world phases. You can collect terabytes of garbage with only a few milliseconds of pause. In V8 or Go, this would be many seconds of pause time, as the mark phase is "stop the world" in both.
You can find benchmarks that show it one way or the other, but in badly behaving or allocation-heavy applications, Java's new collectors or the older G1 will perform far better. V8 and Go are dishonest about their GC performance by showing average pause times with high collection frequency. The important GC cycles are the long ones, so you really want to measure worst-case pause time under load.
Under heavy load Go's design falls over. It's not compacting, not generational, and the mark phase pauses the application. IMO, it's just not a good GC. V8 is better; it is generational and compacting, but its mark phase is still STW. Java's ZGC isn't generational, but importantly, the mark and sweep phases don't stop execution. No matter how big your heap is and how much garbage there is, your GC pauses will be short.
> ... In V8 or Go, this would be many seconds pause time as the mark phase is "stop the world" in both.
Like I said before, your information is outdated. V8 has both incremental and concurrent marking. I even mentioned it in my comment, but apparently you didn't read that either. V8 only stops the world for semispace evacuation and compaction. It doesn't compact the entire old generation at once, but decides on a per-page basis.
For Go's GC, I am going by public information presented by one of its primary designers, Rick Hudson, who has since retired.
You can argue with his slides if you want. https://blog.golang.org/ismmkeynote
Java's new GCs sound fantastic! It's great for the field in general. However, I would encourage you to spend less time misrepresenting other people's work and making up numbers.
For Go, I'm going to be a bad HN user and not read the whole article. Sorry, it's just too long for this time of night. It does appear that my understanding of Go GC is out of date. There's been many improvements in the last couple years. Some strange behavior due to not enough knobs to configure GC, but it appears to have a near constant GC pause? https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...
I'm annoyed that Google doesn't offer much benchmarking data for V8. Huge articles about improvements made, with a single benchmark image. And they didn't use standard benchmarks for either, so it's unclear what they're even benchmarking. The Go slides you linked include benchmarks from some guy's production server he tweeted images of, and a bunch of standard benchmarks, but they only show % throughput improvement and no pause times.
Well unfortunately you are still misunderstanding, so let me be more precise so we are talking about the same thing. V8 uses incremental marking (i.e. splitting mark work into smaller chunks and interleaving those chunks with mutator time) as well as concurrent marking (i.e. multiple parallel collector threads marking in the background, concurrent with the mutator). Not mentioned in the article, but sweeping of pages is also incremental (i.e. dead space reclaimed on-demand when free lists run empty) and concurrent when idle (i.e. in the background). So the statement "V8 still uses STW for mark sweep" is just wrong. Like I said before, V8 only stops the world for semispace scavenges (fast, < 1ms) and compaction (slow, ??ms), but compaction is less frequent than mark/sweep, which is incremental and concurrent.
You also misunderstood what is reported here. That 50ms main thread marking time is cumulative, meaning those 50ms are spread over the entire garbage collection cycle, split up into small increments so that the mutator (main thread) is not stopped the entire time. It's explained there in the text and illustrated in the second-to-last diagram.
> quite bad compared to ~5ms or less
Again, it is not 50ms pause, it's 50ms work, split into much, much smaller incremental pauses, typically less than 1 ms each. That number is not presented in your linked article but is pretty typical. The V8 GC needs sub-millisecond pause times because it has a soft realtime requirement in that it may end up on the critical path for frame rendering (60fps = 16.6ms).
> For Go, I'm going to be a bad HN user and not read the whole article.
FTA "...The August 2017 release saw little improvement. We know what is causing the remaining pauses. The SLO whisper number here is around 100-200 microseconds and we will push towards that. If you see anything over a couple hundred microseconds then we really want to talk to you and figure out whether it fits into the stuff we know about or whether it is something new we haven't looked into. In any case there seems to be little call for lower latency. It is important to note these latency levels can happen for a wide variety of non-GC reasons..."
TLDR: if you see pause times of more than a couple hundred microseconds, call the red phone.
Also, please note, I am just trying to provide accurate information about the collectors I do know about, designed by people I work(ed) with. I don't know enough about ZGC or Shenandoah to confidently assert anything about their performance characteristics, but based on what I read I am actually very excited to see them make it into production. I consider advances in GC to be overall a good thing for everyone, and would encourage you to be more open to learning the advantages and disadvantages of various systems without as much derision, and not try to pick sides.
There are times when I'd happily trade more frequent GC pauses for a smaller per-process memory footprint. How do you find a reasonably small Xmx that doesn't lead to OutOfMemoryError exceptions?
That's tough to figure out. In new versions of Java, I think 14+, if you use ZGC collector it will return unused memory to the OS. Memory options vary depending on collector, but new versions of ZGC support "soft max" heap size and uncommit. Together it might be close to what you're looking for https://malloc.se/blog/zgc-softmaxheapsize
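As a concrete starting point, something like the following (flag availability varies by JDK version: `SoftMaxHeapSize` appeared around JDK 13, and ZGC needed `-XX:+UnlockExperimentalVMOptions` before it left experimental status in JDK 15 -- check your JDK's docs):

```shell
# Hard cap the heap at 4g, but ask ZGC to aim for 1g and return
# memory above that to the OS after it has sat unused for a while.
java -XX:+UseZGC \
     -Xmx4g \
     -XX:SoftMaxHeapSize=1g \
     -XX:ZUncommitDelay=60 \
     -jar app.jar
```

The soft max is a target, not a guarantee: ZGC will grow past it up to -Xmx rather than throw OutOfMemoryError, which is exactly the trade-off being asked about.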
I should mention the GC situation was worse until the last few years. Until ZGC and Shenandoah came around, Java still didn't collect frequently but when it did there were long pauses. This is what V8 and Go's collectors were designed to avoid. They have more overhead from collecting frequently, but low pauses. With the new Java collectors you get the best of both worlds.
You can set heap ratios and such for older collectors to decrease Java memory use with those, but IMO you're better off using ZGC and uncommit these days
Indeed. That fact obviates this option in most cases. You have to spend time tweaking obscure, unstable knobs (the X in Xmx means Oracle is free to alter its meaning at any time) and risk either a.) serious failures in production or b.) poor results, because the conservative choices necessary to avoid 'a' achieved little improvement and you wasted your time. The real world for most enterprises is a vast herd of communicating components, and toying with GC switches multiplied by N things is a nonstarter.
So while you're technically correct that excessive memory use by conventional JVMs is "not always true," in practice you are wrong. That reality comes with a real cost that appears on a real bill every single month.
Go's approach of minimal knobs leads to unfixable problems in production. Java gives more options to tune GC for your use case
I didn't bring up Go, but since you did the thing I see is that Go -- a much younger language -- is going places Java never has, or did so only haltingly. Caddy is a case in point. Here is Go taking on nginx, haproxy, Envoy, etc.
The people that once imagined using Java for such things have retired or moved on to other battles. No one seriously ponders attempting 'systems' tasks with Java any longer; that whole space was ceded to more efficient languages. My opinion is that Java's poor efficiency -- a big part of which is its excessive memory consumption -- is the reason for this.
That's my opinion. What I know for fact is that today, when people are making design decisions about new services and their deployment, Java is a problem; it is understood that anything implemented in Java is going to sort right to the top of the list of memory pigs in the cluster, and you can only afford so many of those.
This feels like G1GC erasure. I'll also say we've tried out ZGC and while pause times were low, it had a huge CPU overhead and the performance of our application was notably worse and we went back to G1GC. We're still on Java 11, so maybe we'll see some magic when we eventually try the newer versions.
Give it a try.
https://wiki.openjdk.java.net/display/shenandoah/Main
https://www.slideshare.net/jelastic/choosing-right-garbage-c...
https://twitter.com/shipilev/status/1308320432404168705
Newer versions of ZGC support class unloading and have some other performance optimizations
https://news.ycombinator.com/item?id=24571054
In my limited tests I never saw a GC pause over 5ms. I was basically hammering a Spring Boot application with HTTP load tester.
While not-pausing is generally a huge improvement (after several decades of GC development), now what about thrashing of CPU caches?
https://news.ycombinator.com/item?id=24571054
On another note, I'm not aware of other freely available GCs in other languages that are able to easily scale to multi-GB/TB scale memory usage. A while ago, I benchmarked an open source key/value golang project and it performed miserably when it reached GB level memory usage.
By aggressive you mean they actually do that now, right? As far as I know, before ZGC no GC did that, and they're still backporting that feature to G1, right?
Edit: I'm actually quite pleased with ZGC. I have the Eclipse language server use it, and my editor's memory usage on average is so much lower.
https://news.ycombinator.com/item?id=24571054
Agreed.
GraalVM is showing enormous promise in this area, alongside efforts like Project Valhalla.
I think the emergence first of microservice and then of FaaSes has lit a fire underneath OpenJDK folks and others in the ecosystem.
Although Java applets didn't pan out, they gave people a glimpse of the future, paving the way for Shockwave, Flash, and the rich interactive web applications that dominated the 2000s. As Java pivoted to the server, it also ushered in the next generation of enterprise web applications.
Happy 25th Java! From a language many first experienced via scrolling web tickers to a rock solid server-side platform that went on to dominate the enterprise. Java will remain ubiquitous for many years to come--even if many don't even know it's there.
When googling tutorials, I see the same material I found 12 years ago. A lot must have happened since then.
What's a good resource to learn Java for somebody who already knows how to program? I'm interested in ecosystem, tooling, best practices, common pitfalls etc.
Yes. As others have said, 'java is stable' which means the old stuff still works. Which in turn means that a lot of people are still using it and still writing blog posts about it. Still, many of those old things are now obsolete, or needlessly complex and just 'lesser than'. They don't support certain nice features or support them very badly, or have other significant downsides - 25 years of experience does lead to insights, after all.
20 years ago, you loaded your JDBC driver with `Class.forName`. You _STILL_ see this in many examples (and it hasn't been necessary for 15+ years).
These days, you:
* Use a connection pool like HikariCP.
* Consider raw JDBC basically nuts as far as an API goes, and use JOOQ or JDBI. Or JPA/Hibernate of course, if you don't want SQL/want DB independence and don't think you'll need to performance-tweak queries too much.
* Use serializable transaction levels and toss _all_ the code that interacts with DBs into a lambda so that the framework can handle retries for you.
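The "serializable transactions plus retry lambda" idea can be sketched framework-free. `inTransaction` and everything else below are hypothetical names, not a real API; in practice JDBI's transaction methods or Spring's `@Transactional` plus a retry mechanism play this role:

```java
import java.util.function.Supplier;

public class TxRetry {
    // Hypothetical sketch of the retry-lambda pattern: the caller hands
    // over the whole unit of work, so the wrapper can rerun it when a
    // serializable transaction aborts.
    static <T> T inTransaction(Supplier<T> work, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // Real code would BEGIN here and COMMIT after work.get().
                return work.get();
            } catch (RuntimeException e) {
                // Real code would ROLLBACK and retry only on
                // serialization failures (e.g. SQLSTATE 40001).
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        int result = inTransaction(() -> {
            if (++calls[0] < 3) throw new RuntimeException("serialization failure");
            return 42;
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // 42 after 3 attempts
    }
}
```

Because the work is a value the framework owns, retries are invisible to the caller; that's why the "toss it all into a lambda" convention exists.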
And that's just DBs. As a general trend:
Libraries tend to wax and wane. Right now Spring is _very_ popular. JSP is the kind of outdated crud that is just a straight-up 'do not use this right now' (even if it still kinda works). For date stuff, use java.time. Libraries in general are more focussed on configure-via-java-code, and dip more into code generation and annotations (example: JOOQ). You don't use the JSONObject API, you use Jackson or Gson. The list is very long.
I have no particular advice on how to know all this stuff as someone not familiar with the (modern) java ecosystem, though. Just pointing out that 'stable' doesn't translate to 'not much new in the past 15 years'.
Try Kotlin instead of Java, and if you must use Java, read the book Effective Java.
If you use Kotlin, check out Ktorm for DB access and http4k for HTTP APIs.
Try Scala instead of Kotlin, it's much more powerful, and you can safely avoid the mad Scala libraries jam-packed with symbol infix notation.
I'm the perfect case of what you said re: old devs -- I've been using various JVM languages for ~15 years, but still didn't think of replacing JDBC :) Will check those out!
OpenJDK is the name of Oracle's (one and only) Java implementation project (take a look at the logo at http://openjdk.java.net/). Oracle JDK is the name of the commercially supported product built from OpenJDK, and Oracle also distributes the JDK under a 100% free license (http://jdk.java.net/).
While OpenJDK has been the open-source part of the Sun/Oracle JDK since 2007, Oracle recently completed open sourcing the entire JDK, so that there are no more paid features. The JDK used to be part-free and part-commercial, and now it is completely free; you only pay Oracle -- or other companies -- for support if you want it. Other companies contribute to OpenJDK as well, but Oracle still contributes ~90% of the work, and all OpenJDK builds by all vendors are licensed by Oracle. So while you absolutely don't need to pay Oracle (or Sun, as you did before) for using the JDK any more now that it's 100% open, you should at least know that Oracle is the company that (primarily) funds and develops OpenJDK.
(I work on the JDK, i.e. OpenJDK, at Oracle)
The real damage that Oracle caused by their Android lawsuit, and by their JDK licensing scheme change, will reverberate for a long time.
> and by their JDK licencing scheme change
The JDK licensing change was that Oracle changed the JDK from part-commercial part-open to 100% open for the first time in Java's history. On the commercial side, the change was from part-upfront, part-subscription to just subscription, which cut the price for customers by a factor of 5, I think.
What's important to remember about Java is that it's huge, and many companies make money off of it, and so companies have an interest in creating FUD over Oracle's involvement. I can't speak for other parts of the company, but there's near consensus among Java users that Oracle has been a better steward of Java than Sun, both in terms of technical investment and licensing.
[1]: https://youtu.be/DjOcfkhTZkM
Or Dropwizard if you want microservices that are more lightweight and performant than Spring.
https://glennengstrand.info/software/performance/springboot/...
Or Vert.x or Play if you want to code reactive microservices.
https://glennengstrand.info/software/architecture/microservi...
> No one does inheritance anymore composition's all the rage.
Hmmm, that is an oversimplification IMHO. When OOP first gained popularity, there was a lot of emphasis on inheritance. Inheritance got misused, resulting in systems that became too rigid to change over time. The proverbial pendulum has swung the other way, resulting in this sentiment of "no one does inheritance anymore", but inheritance is still very powerful and productive provided that you learn how to use it correctly. Think of inheritance as an advanced feature best left for more senior engineers to use.
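A toy illustration of the trade-off being discussed (class and method names are made up for the example): extending ArrayList would leak methods like `add(int, E)` that let callers violate the stack discipline -- the classic misuse -- whereas composition exposes only the operations you intend.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Composition: the stack *has* a deque rather than *being* a list,
// so only push/pop/isEmpty are part of its public contract.
public final class SimpleStack<E> {
    private final Deque<E> items = new ArrayDeque<>();

    public void push(E e)    { items.push(e); }
    public E pop()           { return items.pop(); }
    public boolean isEmpty() { return items.isEmpty(); }
}
```

Inheritance still earns its keep when there is a genuine is-a relationship and the superclass was designed for extension; the pain comes from reaching for it by default.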
Reactive is probably a mistake with loom looming around the corner.
It might not be the trendiest, but it's definitely the most popular (as in "in demand").
Modulo usual caveats about not promising anything and safe harbours and forward-looking statements, VMware is interested in Spring efficiency up to a very high level in the org chart. Watch this space.
Disclosure: I work for VMware.
Does maven still have issues with snapshot builds and classpaths?
I've managed to avoid Spring, everyone I know that's worked with it complains it's difficult to work with.
For microservices I like simple, single-purpose pieces like sparkjava.
What do you mean by this? Startup times, or developer onboarding?
Because if it's the latter, I've had this conversation before. To me, Spring seems like the last gasp of "Enterprise" Java. Too much is implicit and obscure (aspect-oriented programming is an anti-pattern, IMHO), too much is configured (yuck, XML).
Each to their own I guess.
Java is fundamentally a slow adopter of new techniques (it just got lambdas in JDK 8), but a lot of the Google libraries fill in the gaps.
Note that if you are learning Java for Android development, that's a whole different sub-discipline. In that case I recommend the Android tutorials since most of the work is dealing with the Android SDK.
[1] https://www.tutorialspoint.com/guava/index.htm
[2] https://www.baeldung.com/guice
Interesting use of the word 'just'. At the risk of making readers feel old... Java 8 was released _6 and a half years ago_.
It could be anecdotal, but I've found in practice vendors and companies are conservative about their JDK upgrades. I haven't seen anything prior to JDK 6 in a while, but I don't think the upgrade cycle is as fast as say, python minor version upgrades.
[1] https://en.wikipedia.org/wiki/Java_version_history
As for guice, my preference for reflective runtime injection is Weld since it's the standard reference implementation.
There are a lot of features that have been subsumed into the JDK, and you should usually prefer the JDK implementation where available. Guava has deprecated the redundant functionality, so if you pay attention to your IDE you will be fine.
I did forget about the caches though! Good call.
Protobuf (another Google Java product) also made big breaking changes in libraries between 2 and 3.
The relatively recent adoption of the 6 monthly release cadence is helping a lot - particularly with the small feature additions that used to get stuck behind the release train.
That said, for most orgs, the biggest changes you might see would be in libraries and frameworks used. There will most likely be less XML, better build tools and more modular library usage than 12 years ago.
Regarding the "bite the bullet": I was also a bit afraid, but Java is a great language. Yes, it's a bit verbose, but that's compensated for by its amazing tooling, especially IntelliJ IDEA.
- The current release of Java Standard Edition (SE) is 15; but many applications are still using version 8 or something in between.
- The enterprise version (EE) of Java is now called Jakarta and is part of the Eclipse Foundation [0], with a focus on Cloud Native (Kubernetes, Docker, ...)
- naming schemes: Java 1.8 is just Java 8; from Java 9 onward the "1.x" prefix was dropped entirely, so it's simply Java 11, Java 15, etc.
- the versioning cadence of Java has changed with version 10 (in 2018) from "every few years" to "twice a year", that's how we got to version 15 in such a short period of time.
- The officially recommend way to build Android apps today is in Kotlin, but there is still support for Java.
- The JDK is used for developing apps in Java, the JRE just to run those apps. The two seem to be converging: everybody just downloads the JDK. The JDK contains a debugger, a shell, a documentation generator, a compiler, etc.
- OpenJDK: the project containing the Java source, available in two builds, OpenJDK and Oracle JDK (the difference seems to be in commercial support from Oracle, not in code/functionality). Oracle is the primary contributor to OpenJDK [1][2]
[0] https://jakarta.ee/
[1] https://www.marcobehler.com/guides/a-guide-to-java-versions-...
[2] https://openjdk.java.net/faq/
Not really. That’s the point of Java. It’s stable.
Check the Wikipedia page on Java history to see the handful of new features then just look for tutorials on those that you need to use. Most of these features are things you can pick up as you go.
1) It finally got anonymous functions in Java 8, so you no longer have to use anonymous classes and complicated design patterns as substitutes.
2) The distribution model changed from targeting a preinstalled JRE to bundling your own runtime with jlink and jpackage (i.e. the same as native applications and .NET Core).
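A minimal sketch of point 1 (the sample data is made up): the same comparator written the pre-Java-8 way and as a lambda.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Charlie", "alice", "Bob");

        // Before Java 8: an anonymous class just to pass one method.
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareToIgnoreCase(b);
            }
        });

        // Java 8+: the same comparator as a lambda expression...
        names.sort((a, b) -> a.compareToIgnoreCase(b));
        // ...or, for this particular case, a predefined comparator.
        names.sort(String.CASE_INSENSITIVE_ORDER);

        System.out.println(names); // prints [alice, Bob, Charlie]
    }
}
```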
For Spring, Jakarta EE, Micronaut or Quarkus, the docs on their respective websites are enough.
HotSpot is a pretty great codebase actually. It's very easy to read compared to the CLR. The issue with it isn't that it's bad code or fragile, it's just that it's very complex, but they're reducing complexity over time by removing obsolete optimisations (obsolete, or so they argue).
The SVM codebase is also nice but it's a very different model. Over time the codebases may be merging, as Graal the compiler gradually replaces C2. But that could easily take a decade.
Java and GraalVM come from two different parts of the company. They aren't done by the same people.
No need for Xcode on macOS, or gcc on Linux, or clang on the BSDs?
A significant reason for Java to be relevant even today is Android.
But Java is the majority language on the back end too.
Except for periodically updating my Java AI book [1] (5th edition was released July 2020), I don’t much use Java because most of my customers want to use a Lisp language.
Where should Java go now? I think both OpenJDK and also Oracle are doing a good job adding new features. I would vote for faster startup time; keep improving language conciseness; better data initialization literals.
[1] https://leanpub.com/javaai
Just a few months ago I dusted off an old project from 1997, loaded it up in IntelliJ IDEA, built it, and ran it. It worked! And that's Java's best feature, it's long-term language and library stability. I worry that it is at risk now with Oracle's new 6-month release cycle.
Funny how that never worked out. Even the few Java desktop apps that don't look like 30-year-old SunOS apps (IntelliJ is probably the best-looking Java app) have to have substantially different versions for each platform.
A few people in the research groups tried re-writing a few apps in Java, like Acrobat Viewer, but nothing ever came of it.
It's rare for any software to achieve a kind of stable sustainability that allows it to continue development for as long as 25 years.
It gives me hope for some of the big open source projects - hope that maybe, eventually, all the bugs will be fixed - even if it takes decades.
Java is like your old wife/husband/spouse. It's not sexy, you probably don't enjoy much when doing things with it. But it's dependable and reliable, And, aren't you where you are now thanks to it?
Newer languages.. yeah they're sexier, more fun to play with, make people think you're cool when with them, but they might end up wasting your time :)
Sorry to hear about your marriage going bad. Maybe some counseling will help?
Based on the current state of affairs, Oracle and the JVM seem very far away from mainstream ML.
MXNet also makes deep learning a first-class citizen in Java, but yes the research community is firmly entrenched in Python atm.
I see JVM languages as being in a decent spot for deploying ML/DL, and maybe an "emerging" option for training DL models.
Folks like Jeremy Howard have increasingly been expressing growing pains with Python and TensorFlow has been looking for something with a better type system like Swift.
If I had to take a guess, I'd say that Python is likely going to be sharing the ecosystem with a language with a better type system and performance, like Julia or something on the JVM.
https://github.com/tensorflow/java
Instead, Julia should really be first to mind when considering that question. Although I might expect the author to respond with something about the mysteries of 'production'.
The book below provides a nice overview of using Java for DS and ML
https://learning.oreilly.com/library/view/data-science-with/...
Here are the upsides in my mind, and as with all these things keep in mind - nothing's perfect! Everything has tradeoffs and I am making no absolute statements. Anyway, in no particular order:
In terms of downsides:

And on a general note: don't underestimate the power of true expertise in a language. I find that in a lot of cases when I'm reading about something new, it's hard for any upside to overcome deep experience. In that way, the best language for anyone is probably the one they know best, so maybe I'm biased.

If you need to couple state with logic, OOP is what you want.
The only reason to use Java atm is interop with another JVM language (e.g. Clojure) or to not get locked into the MS ecosystem. C# is otherwise just streets ahead at this point.
Also, although C# is pretty nice, Kotlin is nicer.
Re checked exceptions: they're great. They document how your code can go off the rails, or force callers to prevent it.
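A small sketch of that point (the file name is hypothetical): the `throws` clause puts the failure mode in the method signature, and the compiler forces every caller to either handle it or declare it in turn.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Config {
    // "This can fail with an I/O error" is part of the contract,
    // visible in the signature rather than buried in the docs.
    static String readConfig(Path p) throws IOException {
        return Files.readString(p);
    }

    public static void main(String[] args) {
        try {
            System.out.println(readConfig(Path.of("app.properties")));
        } catch (IOException e) {
            // The failure mode is handled explicitly here,
            // not discovered as a surprise in production.
            System.err.println("Could not read config: " + e.getMessage());
        }
    }
}
```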
The history of Java in a nutshell.
Disclosure: former Sun Micro dude who was there at its launch and just doesn't have any love for its use anymore.
Here's an example of two far less antiquated technology companies using a PDF for something that could also have been a web page, but was found to make more sense as a print-style publication: https://blog.google/documents/73/Exposure_Notification_-_FAQ...