JobRunr: A library for background processing in Java


96 points | by mooreds 396 days ago


  • rdehuyss 396 days ago
    Hi all, I'm Ronald - the creator of JobRunr.

    First of all, thanks to @mooreds for posting JobRunr on Hacker News.

    Second of all - I read some claims that being in the 'job scheduling' business is easy money. I would like to point out that's not really the case.

    With JobRunr being open-source and more successful than I could ever have imagined, this brings along a lot of stress. If you make a mistake (which I did in V6), the whole world starts to see it. I also try to keep the number of open issues really small, as these things linger in my head and add to the stress.

    Anyway, all this to say that I'm now able to provide my family with food, but I'm still not breaking even (meaning that if I had just kept freelancing as before, I would have more money in my bank account).

    But, I can now work on something I love.

    P.s.: it's indeed LGPL, but that's also the case for Hibernate. It means you only need to open-source your changes if you modify JobRunr's code itself, not if you just use the library.

    See also

    • upghost 395 days ago
      You had me at “I assume you’re not interested in the marketing fluff”
    • mperham 395 days ago
      Congrats Ronald, welcome to the very exclusive club of commercial job system authors.
      • rdehuyss 395 days ago
        Thanks, @mperham. I'm curious how it all will go!

        Enjoying the ride for the moment - it's wilder than I thought. I must confess that all the visits on the website via HN gave me quite the adrenaline rush :-).

  • therealrootuser 396 days ago
    Seems like these job scheduling systems are a dime a dozen these days. Since we're an AWS shop, eventually my team ended up just building a system based on EventBridge and Fargate, killing off a previous system built on top of Quartz. Scheduling is all handled via Terraform. It's been solid for several years now, and costs next to nothing to operate. We can parallelize as much or as little as we want.

    At the end of the day, I don't want to run more dedicated boxes for yet another jobs system. I just want to hand off a container to the ether and say "please run this container until it stops, and do this once an hour or once a day." I don't want to get alerts in the middle of the night telling me that the Quartz scheduler has had some esoteric failure, and I don't want Jobs A, B, and C to get killed because Job D started doing something dumb.

    Having a nice UI is cool, but I would rather not have more servers and relational databases and Java-cron libraries that can do dumbness in the middle of the night.

    • sverhagen 396 days ago
      Within Java, though, Quartz has ruled for years. It has aged, its website has been reorganized so many times that half the search results end in dead links, and it was time for a new contender. But my fear is that someone else takes the crown with another business opportunity in mind, which is likely to fizzle too, and then the cycle just repeats. Another thread here was saying this is easy money, but are open-source (or open-core, or whatever) companies really all that often a slam dunk?
      • manigandham 395 days ago
        > "are open source or open core or whatever companies really all that often a slam dunk?"

        No, they're often not. Many struggle to ever make a profit.

    • kernelbugs 396 days ago
      Any tips, tricks, or resources for getting started with Fargate for one-off or recurring jobs? I have Terraform set up and managing AWS resources, but every time I look into Fargate, the guides seem to point towards running webapps instead of diverse jobs.
      • jamesfinlayson 396 days ago
        I had a quick look at something I've got (not written by me) and it looks like you create an EventBridge rule with a schedule expression and an EventBridge target (which can include an ECS task).
      • inkyoto 396 days ago
        You can use the aws_appautoscaling_scheduled_action terraform resource to create a scheduled scaling policy action to mimic a scheduled Fargate container fire-up, e.g. from zero Fargate container instances to one or however many are required, and then back down to zero.
      • shpongled 396 days ago
        I would look into AWS Batch - it works pretty well for running diverse jobs. I have a few jobs, triggered by S3 uploads, that run for 1-30 minutes, and other jobs that run for hours. All on Fargate.
    • irl_chad 395 days ago
      We came to the exact same conclusion. EventBridge time triggers a Fargate task. The job automatically terminates the process after execution, so the container shuts down and all is good.
  • dvt 396 days ago
    This is the kind of business that is extremely simple, very boring, and heck, even easy to implement, but ends up making the owners 6 figures MRR with a bit of marketing and networking elbow grease. The perfect software lifestyle business.

    I love the splash page too: simple and to the point. They aren't saving the world with AI, they're just making better cron jobs.

    • mooreds 396 days ago
      > they're just making better cron jobs.

      Seems to be good money in job-running software. See Sidekiq in the Ruby world.

      • hardwaresofton 396 days ago
        Yup, this was my first thought as well -- they're creating Sidekiq-for-Java. Lots of enterprise money to be scooped up; I'd assume they have the potential to make even more (especially if they're at it as long as Sidekiq has been).

        That said, tech like Temporal and other workflow managers do exist now... But maybe most people won't choose them because jobrunr is just an `mvn install` away.

  • TimTheTinker 396 days ago
    Reminds me of Hangfire on .NET. I haven't used it since a previous job in 2016, but it was easily one of my favorite tools. You can have it serialize scheduled tasks to Redis or SQL Server -- you get the same durability guarantees the underlying storage mechanism has. The API couldn't be simpler to use.

    Being able to reliably fire-and-forget or schedule background tasks from a web app can be really powerful.
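    For readers who haven't used such a library, the fire-and-forget and scheduled patterns can be sketched in plain Java with a ScheduledExecutorService. This is only a stand-in for the idea, not Hangfire's or JobRunr's actual API, and all names are illustrative; what those libraries add on top is exactly the durability discussed above.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BackgroundSketch {
    /** Fires one immediate, one delayed, and one recurring task; returns how many ran. */
    static int runDemo() throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        AtomicInteger runs = new AtomicInteger();
        Runnable task = runs::incrementAndGet;

        // Fire-and-forget: hand the task off and return immediately.
        pool.execute(task);

        // Delayed one-shot: run once, 100 ms from now.
        pool.schedule(task, 100, TimeUnit.MILLISECONDS);

        // Recurring: every 50 ms (a crude stand-in for a cron schedule).
        pool.scheduleAtFixedRate(task, 0, 50, TimeUnit.MILLISECONDS);

        Thread.sleep(300);
        pool.shutdownNow();
        pool.awaitTermination(1, TimeUnit.SECONDS);
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks ran " + runDemo() + " times");
    }
}
```

    Unlike with Hangfire or JobRunr, these tasks die with the JVM: nothing is persisted, which is the whole point of backing the queue with Redis or SQL Server.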

    • nirav72 396 days ago
      Hangfire was definitely a game changer in the .NET world back in the day. One of the most difficult things to do in .NET before .NET Core was long-running background jobs from IIS without the app pool prematurely killing them. The other option was MSMQ and offloading the task to another process like a Windows service. All around a major PITA.
  • ysleepy 396 days ago
    Interesting to see this.

    I tried it in 2020 and was not very happy with it.

    Serialization was (is?) deeply embedded in the API; my use case didn't need it, and it was a large burden with no upside in my application. Then there were fluent builders which, instead of just collecting parameters, executed on them immediately, which made everything highly order-dependent, with possible invalid states and no indication of why.

    I'd love a lightweight job scheduler with metadata and a dashboard, but with less magic in the java world.

    Maybe I should give it a go again, maybe it has changed.

    • jmartrican 396 days ago
      If you give it a go, post the link. I'd be interested.
  • samsquire 395 days ago
    This is an interesting space.

    I think it is interesting that job scheduling, dependency graphs, dirty refresh logic, mutually exclusive execution are all relevant in the same space.

    I was recently working on some Java code to schedule jobs mutually exclusively across threads: run A only if B is not running, and B only if A is not running, then alternate between the two. I think it's traditionally solved with a lock in distributed systems.
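    Within a single JVM, that "run A only if B is not running" constraint can be sketched with a shared lock and tryLock; a distributed setup would move the lock into a database or coordinator instead. A minimal sketch (all names illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class MutualExclusionSketch {
    // One lock shared by jobs A and B: whoever holds it runs, the other skips.
    private static final ReentrantLock jobLock = new ReentrantLock();

    /** Runs the task only if no other job holds the lock; returns true if it ran. */
    static boolean runExclusively(Runnable task) {
        if (!jobLock.tryLock()) {
            return false; // another job is running: skip this cycle
        }
        try {
            task.run();
            return true;
        } finally {
            jobLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch aHoldsLock = new CountDownLatch(1);
        Thread a = new Thread(() -> runExclusively(() -> {
            aHoldsLock.countDown();
            try { Thread.sleep(200); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }));
        a.start();
        aHoldsLock.await(); // wait until A actually holds the lock
        System.out.println("B ran while A held the lock: " + runExclusively(() -> {})); // false
        a.join();
    }
}
```

    Alternating A and B on top of this just means each scheduler tick offers the lock to whichever job didn't run last.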

    I think there is inspiration from GUI update logic too and potential for cache invalidation or cache refresh logic in background job processing systems.

    How does JobRunr persist jobs? If I schedule a background job from an inflight request handler, does the job get persisted if there is a crash?

    • manyxcxi 395 days ago
      It does. Primarily, worker threads query the DB for jobs to pick up. They can write state back when running a job, and they can have configurable (per job type) retry rules.

      So in your case, the crash would likely leave it in an open/running state if it was already picked up, at which point timeout/retry rules would kick in after a restart.

      If the job wasn’t running yet, just queued, then it would be business as usual upon restart.
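      The recovery semantics described above (queued jobs survive a crash untouched; in-flight jobs are requeued once a timeout expires) can be sketched with an in-memory map standing in for the job table. This is an illustration of the idea only, not JobRunr's internals; all names are made up:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class JobRecoverySketch {
    enum State { ENQUEUED, PROCESSING, SUCCEEDED }

    static class JobRecord {
        State state = State.ENQUEUED;
        Instant startedAt; // set when a worker picks the job up
    }

    // Stand-in for the job table that worker threads would poll.
    final Map<String, JobRecord> store = new ConcurrentHashMap<>();

    void enqueue(String id) { store.put(id, new JobRecord()); }

    void markProcessing(String id, Instant now) {
        JobRecord r = store.get(id);
        r.state = State.PROCESSING;
        r.startedAt = now;
    }

    /** On (re)start: requeue PROCESSING jobs whose worker went silent past the timeout. */
    void recover(Instant now, Duration timeout) {
        for (JobRecord r : store.values()) {
            if (r.state == State.PROCESSING && r.startedAt.plus(timeout).isBefore(now)) {
                r.state = State.ENQUEUED; // per-job retry rules would apply from here
                r.startedAt = null;
            }
        }
    }

    public static void main(String[] args) {
        JobRecoverySketch s = new JobRecoverySketch();
        s.enqueue("report");
        Instant t0 = Instant.now();
        s.markProcessing("report", t0);
        // Simulate a restart 10 minutes later with a 5-minute timeout:
        s.recover(t0.plus(Duration.ofMinutes(10)), Duration.ofMinutes(5));
        System.out.println("state after recovery: " + s.store.get("report").state); // ENQUEUED
    }
}
```

      A job that was merely ENQUEUED at crash time never enters the recovery branch, which is the "business as usual upon restart" case.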

  • zmmmmm 396 days ago

        Don't bother your legal team.
        We're not another SaaS company and we don't have access to your data.
    I love it. (even if that's not quite enough to completely ignore legal in my org)
  • evenh 396 days ago
    db-scheduler is also worth checking out.
    • sea-gold 396 days ago
      db-scheduler looks great (haven't had the chance to directly try it myself) and is Apache 2.0 licensed.
  • victor106 396 days ago
    How is this different from Quartz or Spring Batch?
    • duttonw 396 days ago
      A pretty GUI, plus tooling to re-submit failed jobs (which Quartz lacks).
  • xvilka 396 days ago
    Something as simple as this shouldn't require tons of RAM and CPU just because of Java. Writing it in a natively compiled language -- Rust, Go, etc. -- would produce a leaner product.
    • RhodesianHunter 396 days ago
      Java frequently outperforms Go and Rust. It has a far more mature GC than Go and libraries that have no parallel in either (Netty, Caffeine).

      Almost every company doing real time/stream processing at the scale of TB/hour or greater is doing so by relying on Apache Kafka, written in Java and Scala.

      I've personally been a member of a team that wrote several services (network collectors, observability processing) in Go/Rust/JVM (we preferred Kotlin) in parallel for performance comparisons and found the JVM services to show much better throughput.

      Your perspective seems quite outdated. Possibly from before Go or Rust even existed?

      • Capricorn2481 395 days ago
        Not a Rust user, but I would like a citation for any context in which Java outperforms Rust.
        • RhodesianHunter 394 days ago
          Any long lived process with constant allocation of many small or medium sized objects where your primary performance concern is throughput (such as stream processing).
      • xvilka 395 days ago
        Regarding Kafka (and other Apache Java tools, e.g. Hadoop) - they are often slower[1] than standard Unix tools.


        • xmcqdpt2 395 days ago
          1. Kafka is not really related to Hadoop in any way. Standard Unix tools are also way faster than Postgres, but we don't use databases or Kafka for their batch-processing performance.

          2. Hadoop is used for batch processing and it can be kind of slow, that's true. However the whole point of Hadoop is that you can operate on actually big data, not the 2GB database in the blog post. If it fits on one HDD then you don't need Hadoop and it will just slow you down!

        • manigandham 395 days ago
          That has nothing to do with Kafka, or Java as a language.

          Distributed big data processing systems need big data to actually be useful. Small data that fits on a single machine can also be processed on a single machine, which will always be faster than using a cluster with orchestration, distribution and network overhead.

        • RhodesianHunter 395 days ago
          I don't think you know what Apache is.
    • agilob 395 days ago
      Here's an alternative written in Go. It couldn't even handle the biannual clock changes correctly, so I had to rewrite the microservice project in Java with Quartz :)
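      The usual source of that DST bug is doing date math on raw instants instead of zoned local times. java.time distinguishes the two cleanly; a small sketch (the zone and dates are just an illustration, using the 2024 spring-forward in Europe/Paris):

```java
import java.time.Duration;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstSketch {
    public static void main(String[] args) {
        ZoneId paris = ZoneId.of("Europe/Paris");
        // Daily job at 03:30 local time; last run is the day before
        // clocks jump from 02:00 to 03:00 on 2024-03-31.
        ZonedDateTime lastRun = ZonedDateTime.of(2024, 3, 30, 3, 30, 0, 0, paris);

        // Naive: add exactly 24 hours on the instant timeline.
        // After the clock change this lands at 04:30 local -- the bug.
        ZonedDateTime naive = lastRun.plus(Duration.ofHours(24));

        // Correct: add one calendar day in the zone; stays at 03:30 local.
        ZonedDateTime zoned = lastRun.plusDays(1);

        System.out.println("naive next run: " + naive.toLocalTime()); // 04:30
        System.out.println("zoned next run: " + zoned.toLocalTime()); // 03:30
    }
}
```

      Quartz's CronTrigger does this zone-aware arithmetic for you, which is presumably why the rewrite fixed the problem.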
    • kitd 395 days ago
      > Writing it in natively compiled language would produce a lean product - Rust, Go, etc.

      ... Java?

      You can natively compile Java these days.
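      For reference, the ahead-of-time route is GraalVM's native-image tool; a minimal sketch, assuming GraalVM is installed and the application is AOT-friendly (the jar and output names are placeholders):

```shell
# Build a standalone native executable from an application jar.
native-image -jar app.jar app
./app
```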