Ask HN: Do you use automated tools to create APIs, or do you code them manually?
Do you use some sort of framework/tool for creating the APIs needed for your product/service/application/etc., for example LoopBack (https://loopback.io/), or do you code them by hand?
APIs are interfaces (it’s right in the name!) and should never be directly tied to implementation because:
1. the interfaces must remain stable to the outside world that relies on them
2. They select which underlying resources and functionality are accessible to outside users, and which are hidden. A lot of your internal implementation is either a mess, “temporary”, insecure, or intentionally internal.
3. They control access to the internal application through authentication, authorization, security, and translating data in both directions.
4. When the internal representation changes, they map the new implementation to the old interface to ensure the system remains reliable to API consumers.
5. They offer migration paths when change is necessary
That being said...
Auto API generators are really useful for internal systems where you control the underlying system, the API, and all systems relying on the API.
They are also useful to build an initial API that you plan to fork.
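Point 4 in the list above — mapping the new implementation to the old interface — is where most of the hand-written code tends to end up. A toy sketch of that adapter idea (class and field names are all hypothetical):

```python
# Hypothetical sketch: the v1 interface stays frozen while the internal
# model changes shape. The adapter maps the new implementation back onto
# the old, published contract.

class UserStoreV2:
    """New internal implementation: the name was split into two fields."""
    def fetch(self, user_id):
        return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

class UserApiV1:
    """Published interface: consumers still expect a single 'name' field."""
    def __init__(self, store):
        self._store = store

    def get_user(self, user_id):
        row = self._store.fetch(user_id)
        # Translate the new representation into the old, stable shape.
        return {"id": row["id"],
                "name": f'{row["first_name"]} {row["last_name"]}'}

api = UserApiV1(UserStoreV2())
print(api.get_user(7))  # {'id': 7, 'name': 'Ada Lovelace'}
```

The internal store can keep evolving; only the adapter needs to change to keep the v1 contract intact.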
Yeah I agree with your stance, but not the conclusion. The way that gRPC (and many other systems) handle this is beautiful and the way all APIs should be built: your API is a specification, not code, so you start with the spec (SDL), then generate the adapters your implementation needs to plug into it.
This helps elevate changes to the API itself; you can easily write automated systems which detect changes to the specific SDL files. Or, the way companies like Namely [1] do it, keep those SDLs inside a separate repo, then publish the adapter libraries on private npm/etc to be consumed by your implementation.
This has been around since the early 90s or earlier. ONC RPC did this: you define an interface file and it generates the client and server stubs for you.
NFS is based on this, as are other services. Conceptually it's exactly the same, with some underlying differences.
Going even further back in history, ASN.1 is also like this. It's a description language for data structures, and there are separate representations that can be derived from them. It's sort of like JSON, JSON Schema and Protobuf in one.
TLS certs are encoded in ASN.1 DER, for instance, and LDAP messages are encoded in ASN.1 BER.
I was reading the OP's question as generating the API from the implementation. Your point about generating the implementation (i.e. a Proxy interface) from the API spec is right on.
Thanks for the Namely case study as well. It was timely reading. :)
Mostly, yeah. It's not perfect, but we have written a helper library that handles most CRUD operations for objects, and Connexion does the validation.
Respectfully, I somewhat disagree. APIs are indeed interfaces which should be seen as specifications and should not change. The problem is assuming that the API specification would be generated from the API implementation. The dependency is pointing the wrong way.
I think we agree; if the API spec generated an implementation (a Proxy actually), it is much more stable, and the API endpoint can be an off-the-shelf library that adapts as protocols improve (e.g. XML -> JSON -> whatever)
This is not an XOR question. Both are valid for different APIs. I start with the problem and design the right solution to it.
Often I have a simple problem where I can write a simple, clean API quickly by hand. Here generation is a negative: generated APIs tend to be complex and hard for the user to read.
Sometimes my requirements need something that a tool does better. For example, protobuf gives me an efficient over-the-wire API that can be used in multiple languages: I'll let protobuf generate those APIs, as I can't do better by hand (though we can debate which tool is better for ages).
Sometimes I have a complex situation where I'll write my own generator. For example, I once made a unit system generator for C++: it was able to multiply light-years by seconds and convert to miles/fortnight. No way would a handwritten API support all the code needed for that, but with generation it was automatic (why you would want to do the above is an exercise for the reader). The API was easier to understand than Boost's unit system (APIs are about compromises, so I won't claim mine is better).
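The C++ generator itself isn't shown here, but the underlying idea — track unit dimensions alongside values and refuse mismatched conversions — can be sketched in a few lines of Python (the unit constants and class names below are my own, not from the original generator):

```python
# Toy dimensional analysis: each quantity carries (length, time) exponents.
# Arithmetic combines exponents; conversion checks that they match.

class Quantity:
    def __init__(self, value, dims):
        self.value = value  # magnitude in SI base units
        self.dims = dims    # (length_exponent, time_exponent)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        (self.dims[0] + other.dims[0],
                         self.dims[1] + other.dims[1]))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        (self.dims[0] - other.dims[0],
                         self.dims[1] - other.dims[1]))

    def to(self, unit):
        if self.dims != unit.dims:  # refuse to convert length to time, etc.
            raise TypeError("dimension mismatch")
        return self.value / unit.value

METER      = Quantity(1.0, (1, 0))
MILE       = Quantity(1609.344, (1, 0))
SECOND     = Quantity(1.0, (0, 1))
FORTNIGHT  = Quantity(14 * 24 * 3600.0, (0, 1))
LIGHT_YEAR = Quantity(9.4607304725808e15, (1, 0))

speed = LIGHT_YEAR / FORTNIGHT      # dimension (1, -1): a speed
print(speed.to(MILE / FORTNIGHT))   # ~5.88e12 miles per fortnight
```

A real generator emits this machinery per unit at compile time; the runtime check above is just the cheapest way to show the same contract.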
In a few projects where we had a specific experiment we needed insights from, we ran PostgREST [1].
Basically you create your tables and run PostgREST. Bam! You have an HTTP API for your database. We would then create light wrappers around it that took on specific responsibilities: security, audit, etc. The wrapped APIs are what we exposed publicly.
This may not sound all that helpful but it made the bit we implemented unbelievably tiny. As a plus, we found that a Java application that exposes an endpoint and calls an endpoint is fast to start / stop because it doesn't mess around with DB connection pools.
I've also used PostgREST successfully. It was first meant as a tool for rapid prototyping, but it worked so well that we kept it. We ended up separating the database into a schema called api, consisting only of views of the core domain tables available in the data schema. PostgREST exposed the api-schema only. This way we could model a stable interface and vary the low-level details of the domain tables. Writes were handled by instead-of triggers on the views. So far this has been the quickest way of building an API that I know of.
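That layering — a stable "api" schema of views over mutable "data" tables — can be sketched with nothing but the standard library (sqlite3 standing in for Postgres and PostgREST; the table and view names are invented):

```python
# The "data" layer is internal and free to change; the "api" layer is a view
# that fixes the public shape. Consumers only ever query the view.
import sqlite3

db = sqlite3.connect(":memory:")

# Internal table (sqlite has no schemas, so a name prefix stands in).
db.execute("CREATE TABLE data_users (id INTEGER PRIMARY KEY, given TEXT, family TEXT)")
db.execute("INSERT INTO data_users VALUES (1, 'Ada', 'Lovelace')")

# Public view: exposes a stable shape regardless of how data_users evolves.
db.execute("""CREATE VIEW api_users AS
              SELECT id, given || ' ' || family AS name FROM data_users""")

rows = db.execute("SELECT id, name FROM api_users").fetchall()
print(rows)  # [(1, 'Ada Lovelace')]
```

In the real setup writes go through instead-of triggers on the views, as the comment above describes; the view layer is what lets the base tables be refactored without breaking API consumers.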
How do you handle versioning? If you add a new required field to one of the tables (perhaps even a field that doesn't have a default value), how do you make sure the consumers of your old API keep working?
That sounds like a good balance. I've looked at postgrest in the past but the thought of writing my auth logic in SQL and relying on row-level perms made me sweat too much.
At work (enterprise stuff) we've grown tired of duplicating thousands of lines of boring CRUD stuff and turned to code generation, which is so much better.
The workflow now is:
- think really hard for 10 minutes about the business problem,
- describe it into our meta language (typed structs, UML-like, really simple),
- instantly click'n'build a whole set of API endpoints down to SQL create/alter/drop statements, along with full up to date documentation,
- get excited to be able to deliver so much stuff to customer in no time,
- aaand finally receive a requirement update ('the last one I promise') and send I-love-you letters back in time to our old-selves for such a nice malleable framework (which I dubbed The Platform).
I've created a few projects using the Node/Express ecosystem and have so far loved it. I'm starting to branch out and learn Go now. Can you discuss/compare your experience working in both ecosystems?
We switched from a small Node Hapi monolith to Go at the end of 2017. The main reason to switch was that we wanted types to get more explicit code. We considered a number of options (TypeScript, .NET Core, Dart, etc.) and ended up picking Go because it's a nice, simple language and its performance is great compared to Node.
We reduced memory usage by 80% over Node. We never had a performance bottleneck with Node either but it feels nice to be running on the smallest Heroku dyno and knowing you won't need much more for at least a couple of years.
As for the developer experience we vastly prefer Go over JavaScript. It's more tedious at times but there is no more ambiguity. We love that we barely need any dependencies. Moving from JS to Go was extremely easy as all devs in our team are polyglots and Go is pretty simple. I don't know how easy it would be for a JS only dev, but I imagine it wouldn't be too hard.
When using NPM/Node/JavaScript it seems there are always hidden dangers, probably more in the front end than when doing backend Node. With Go there are no surprises, everything feels solid and predictable.
After about 2 years with Go we are still happy with the decision.
Thanks for the detailed response! I was definitely going to ask 'why not TypeScript?' if your issue was mainly types, but you beat me to it! I reached similar conclusions regarding the benefits you witnessed switching to Go; it's nice seeing them spelled out.
Nobody remembers SOAP anymore, it seems, haha. It's funny, but all these new documentation and code generators for REST were largely invented for SOAP messages long before.
It doesn't make sense to send SOAP messages to browsers, but I cringe every time I find myself with a vaguely documented REST API when integrating systems.
I similarly cringe at vaguely documented APIs, but being a young developer, my experience with REST has been better. For instance, I've consumed a SOAP API where the WSDL specification was primarily a method named "Magic" that accepted a string "Method" and six string-typed parameters, "Parameter1" through "Parameter6".
I think the key is to pick a documentation tool that the team will actually use.
Disclaimer: This is not based on real world knowledge. (To be honest I have practically no "real world knowledge".)
That being said, I just finished a school project where we (our class) were divided into small teams and we had to implement small RESTful web apps. My team chose to kick it off by grabbing two people from the front- and backend team and writing an API specification by hand. It was a breeze and we were done in a few hours. After that front- and backend (almost) never had to interact with each other again until the end of the project where we had to stick the two things together.
This probably isn't applicable to real-world cases where the requirements are ever-changing and everyone's a full-stack dev (or you don't have a team at all), but I found this sort of separation quite useful for this project. (It kept team sizes manageable, different kinds of devs were in separate teams, and we didn't have to wrestle with any tooling that would halt the whole project.)
I see no problem with generating client/server boilerplate from spec though (like Swagger does, I think).
This sort of philosophy could be useful when designing a public-facing API though. In that case you need a well-formed implementation-unaware API documentation and mapping it out upfront by hand could save you lots of trouble.
Use gRPC. With one definition file you can generate:
1) Client code in various languages.
2) Server code in Go, Python, or Node.js.
3) Swagger.
4) A REST interface if you want to.
5) GORM definitions if you use Go's GORM.
We've been using gRPC and Protocol Buffers for the last couple of years. We write APIs using the Protobuf interface definition language, then generate client libraries and server side interfaces. Then it's a matter of implementing the server by filling in the blanks.
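Schematically, "filling in the blanks" looks like this in Python. Note the base class below is a hand-written stand-in for what grpcio's code generator would emit from the .proto file — it mimics the generated-code convention but is not real generated output, and plain dicts stand in for protobuf messages:

```python
# Stand-in for a generated servicer base class: the tooling emits the method
# skeletons from the .proto service definition, and implementing the API
# means overriding them.

class GreeterServicer:  # would come from e.g. helloworld_pb2_grpc
    def SayHello(self, request, context):
        raise NotImplementedError("fill in the blanks")

class Greeter(GreeterServicer):
    def SayHello(self, request, context):
        # 'request' would be a protobuf message; a dict stands in here.
        return {"message": f"Hello, {request['name']}!"}

print(Greeter().SayHello({"name": "HN"}, None))  # {'message': 'Hello, HN!'}
```

The generated client libraries sit on the other side of the same definition, which is what keeps both ends of the contract in sync.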
I love protobuf for this reason. Personally I've opted for Twirp instead of gRPC, as gRPC has a lot of baggage, and streaming is really not necessary for me.
We've had to drop-in-replace, or add a validation or access layer service for something, and using protobuf has made this super easy. Anything interacting with that service is none the wiser.
gRPC has been solid for us on the JVM, and streaming has been great when consuming from Apache Flink jobs, integrating with message queues, receiving push notifications and so on. For async work it's useful to have more than just request/response.
I've been playing with the FoundationDB Record Layer for a personal project of mine, and with this setup I can generate not only the API implementation but also the models used by the persistence layer:
FoundationDB Record Layer uses protocol buffers out of the box. They leverage the fact that you can evolve protobuf messages in a sane way. That's their equivalent of doing database schema migrations.
Both. And if your external clients would rather consume a JSON/REST API, it's easy to derive that from a gRPC API. You can do it right there in your protobuf definition. It's actually easier to do it that way than to deal with OpenAPI's wall of YAML.
(very very small team) We have some handmade scripts in place to generate basic crud endpoints, generated files are then adjusted to the specific needs, but it goes a long way in keeping things organized and consistent with very little effort.
In my case I like the end product to be code. I use snippets/generators to create components (models/controllers/middleware) then modify as needed.
Having used loopback before, it's a quick way to get an api up and running, I personally struggle with injecting logic into endpoints/writing custom endpoints.
If the code's "all there", I know where to look. If I have to intercept hooks it adds an extra layer when searching.
Summary, loopback has been great for creating APIs where all I care about is crud, but for larger projects I stick with snippets/generators so I can extend easier later.
Lately, I’ve been getting back into Spring Boot. Spring Data REST automates a lot of the CRUD endpoints, with easy enough configuration and customization. I’ve been declaratively securing it all with Spring Security.
I prefer to code one level deeper and I mostly use plain Spring MVC controllers. That way I can still have spring security for the endpoints but it keeps the endpoints more decoupled from the repositories.
I typically have a repository generated by Spring Data, a small service layer with business logic on top of those and then an MVC controller that only talks to the service layer, never the repositories.
Each controller also has its own DTO class(es) for request bodies and responses and a small converter between DTO and entity. Kotlin extension methods make it easy to add the toDto() method onto the entity so a typical controller will fetch the entity from the service and return entity.toDto().
Kotlin, Spring Boot and Spring Data are amazingly well suited for this.
Spring Framework and spring boot in particular have made enormous progress in recent times and combined with the performance of the JVM it’s one of the best ecosystems to do this in.
Also, you could use projections in place of DTO’s.
You don't really need DTOs because you can use projections and set a default projection to be used when that entity type is returned in a collection. Any entity fields that should never be exposed can be annotated with @JsonIgnore. And then if you need endpoints that aren't CRUD, you can build those the usual way.
For personal projects I'll hand code them (usually) because I like thinking about API design and API UX.
For professional stuff... it really depends. I like gRPC, but codegen needs team buy-in... It can quickly make a fast development loop hurt if done poorly. Doubly so if IDEs are involved for some users and the IDE is constantly updating its caches of types and interfaces. I've just seen it turn into a hot, frustrating mess very quickly.
We tried writing OpenAPI docs to implement a contract-first development workflow, with the idea that backend & frontend/mobile engineers would agree on the API interface by discussing OpenAPI changes in a pull request, and only then start implementing it (on the backend side) and using it (on the client side).
This didn't pan out well, because it turns out OpenAPI isn't very easy to read, especially when you're reviewing a diff in a pull request. We didn't get the engagement we were looking for in pull requests.
We've since invested in building a simpler, human-friendly API description language based on TypeScript, which exports to OpenAPI 3. It's still early, but we've got a lot of positive feedback and quick adoption across the company (50 engineers).
You can check it out at https://github.com/airtasker/spot. Feel free to send us feedback in GitHub issues or replying to this comment :)
I prefer to write them by hand. Most APIs, to start, don't have a lot to them. They tend to grow in scope over time. So, it's pretty easy to just throw together your initial idea, and incrementally grow it from there.
I might think differently if confronted with a huge API surface area to build off the bat, but I haven't run into that yet.
Manually. However, I have had the luxury of implementing relatively small APIs. If I was doing something like the Google APIs, I'd probably consider automation. That said, I'd probably want to write the automation, myself, as I'm an inveterate control freak.
For HTTP APIs, I'm a full convert to OpenAPI - write your API document by hand, then code-gen the client/server stubs.
It requires a small investment upfront, but will pay huge dividends once your project is rolling. You have a single source of truth for publicly exposed endpoints and model descriptions (your API document), and you can instantly regenerate certain key components (e.g. model binding, new routes, etc) whenever that document changes.
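The "single source of truth" idea can be shown with a much-simplified spec format (invented here — real tools consume full OpenAPI documents): the document drives which routes exist, and the generated layer refuses anything outside it.

```python
# A hand-written API document, radically simplified: (method, path) -> operationId.
SPEC = {
    ("GET", "/users/{id}"): "get_user",
    ("POST", "/users"): "create_user",
}

def build_router(spec, handlers):
    # Every operation in the spec must be implemented, and nothing outside
    # the spec is routable: the document is the single source of truth.
    missing = [op for op in spec.values() if op not in handlers]
    if missing:
        raise RuntimeError(f"unimplemented operations: {missing}")

    def route(method, path_template, **kwargs):
        op = spec.get((method, path_template))
        if op is None:
            raise LookupError(f"{method} {path_template} is not in the API document")
        return handlers[op](**kwargs)
    return route

handlers = {
    "get_user": lambda id: {"id": id},
    "create_user": lambda name: {"id": 99, "name": name},
}
route = build_router(SPEC, handlers)
print(route("GET", "/users/{id}", id=7))  # {'id': 7}
```

Regenerating after a spec change then amounts to rebuilding this routing layer, with the type-checker or build failing loudly for any operation you haven't implemented yet.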
Yeah, a couple of years ago we switched from vanilla Flask to Connexion, which lets you describe your API through an OpenAPI spec. Connexion handles routing and request validation, and our developers can just import the YAML into Postman for testing, as well as use Redoc for generating pretty documentation sites. Overall, the biggest pain point, as others have mentioned, is writing and maintaining the spec. OpenAPI's structure can take some time to get used to, and maintaining the whole API in one file is a little tough, but it's not unmanageable with code folding and good schema definitions.
I used Silex for a long time and when it got deprecated moved over to Symfony's MicroKernel (https://symfony.com/doc/current/configuration/micro_kernel_t...). Tiny enough to get started in a matter of minutes and when your project grows bigger then you can easily refactor either the whole project or just parts of it to "standard" Symfony architecture.
1) Write code, generate Swagger/OpenAPI from it. Works pretty well with big frameworks like Spring for Java or Symfony for PHP. Drawback: it is too easy to change the API, which tends to break backwards compatibility too often.
2) Write Swagger/OpenAPI, generate code stubs from it. Works well enough with Go and TypeScript. Tends to keep client-server contracts stable. Drawback: server code is overly complicated and needs an extra layer of DTOs to convert from domain terms to API models.
I'm using a custom protocol on top of MQTT. I have a big CSV file with all the topics/payload types/etc. specified, which is then used to generate a common library for our software services.
Thanks to Rust's nice code generation capabilities, I have several types (many enums) which automatically serialize/deserialize from/to MQTT messages, checks included. Really cute.
I write API code manually and use a testing tool that also generates an OpenAPI file and saves it in git. This keeps the API docs always up to date and gives a history of changes to the actual API via git. (Stack: Rails, RSpec, and some gem for OpenAPI.)
I use Django REST Framework, which may or may not be an automated tool depending on the definition you are using, but DRF makes APIs very declarative and I love it (batteries included).
Most languages have a library that takes a JSON structure from a file and creates an API, e.g. json-server on Node.js. I just use that initially until the "need" for the db becomes clear, i.e. what data I need to interact with. After that it's custom all the way; it's more malleable, I find.
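A stdlib-only stand-in for that idea (json-server itself is a Node tool; the in-memory DATA dict below replaces its db.json file): serve each top-level key of a JSON structure as a GET endpoint.

```python
# Minimal "JSON structure -> API" sketch using only the standard library.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DATA = {"posts": [{"id": 1, "title": "hello"}]}  # stands in for db.json

class JsonApi(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.strip("/")
        if key in DATA:
            body = json.dumps(DATA[key]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), JsonApi)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/posts") as resp:
    posts = json.loads(resp.read())
print(posts)  # [{'id': 1, 'title': 'hello'}]
server.shutdown()
```

That's the whole appeal: nothing to design up front, and you throw it away once the real data access patterns are clear.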
I feel the opposite: automated tools are really useful for smallish POC type things -- MVPs and early stage work, but fail when things reach a certain level of complexity.
[1] https://medium.com/namely-labs/how-we-build-grpc-services-at...
I like to design systems in this order:
1. My data model (database schema), because it gives you good questions to ask regarding the business side of your problem and lets you go very deep just by asking "for each a, how many bs can we get?"
2. My external API, because it requires you to "dumb down" your problem and see which parts of your model you want to expose, and how.
3. Then I actually start coding my business processes, and bind them to the model on one end and the API on the other.
If I were to use a code generation tool, it would need to generate both the db and the API stubs, together with the correct information exposure. I'm not aware of any tool that would let you do that.
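The shape such a tool could take is easy to sketch, though: one model description emits both the DDL and the API surface, with an explicit "exposed" flag controlling information exposure (the model format and field names below are invented for illustration).

```python
# One model description drives both layers.
MODEL = {
    "user": {
        "fields": {
            "id": {"sql": "INTEGER PRIMARY KEY", "exposed": True},
            "email": {"sql": "TEXT", "exposed": True},
            "password_hash": {"sql": "TEXT", "exposed": False},  # never leaves the db
        }
    }
}

def generate_ddl(model):
    stmts = []
    for table, spec in model.items():
        cols = ", ".join(f"{name} {f['sql']}" for name, f in spec["fields"].items())
        stmts.append(f"CREATE TABLE {table} ({cols})")
    return stmts

def generate_api_fields(model):
    # The API stubs only ever see the exposed subset of each model.
    return {table: [n for n, f in spec["fields"].items() if f["exposed"]]
            for table, spec in model.items()}

print(generate_ddl(MODEL)[0])
print(generate_api_fields(MODEL))  # {'user': ['id', 'email']}
```

A real tool would of course emit handlers, migrations, and validation too, but the key property is there: the db and the API can never disagree about what is exposed, because both are derived from the same source.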
For OpenAPI, Connexion by Zalando is one of the best implementations I have used. You just need to write the logic and provide the API spec.
[1] http://postgrest.org/en/v5.2/
If I was to start a new API today I'd use Hasura. It automatically creates a GraphQL schema/API from a Postgres database. It's an amazing tool.
https://hasura.io/
Looks really interesting to easily layer a graphQL API on top of a Rails app with a few serverless functions...
I'll expand a bit on my previous comment.
So the idea is that Hasura is a stateless layer on top of Postgres that generates all the necessary GraphQL schema/queries/mutations/real-time subscriptions for doing CRUD based on the Postgres schema. If you change the tables (either via the Hasura admin or some migration system) it all adapts automatically, as you'd expect. It can use a remote Postgres DB; you don't need to run the API and DB on the same machine.
Performance is fantastic. Hasura is very efficient in terms of speed and memory consumption. Even with a free Heroku dyno you should get thousands of reqs/s.
On top of direct data from tables you can also read Postgres views. Essentially you can read a custom SQL query from GraphQL.
Hasura can also integrate external GraphQL schemas via a mechanism it calls "stitching". The idea is that you can point remote GraphQL schemas to Hasura (on top of the current one from Postgres) and it will serve as a gateway of sorts between all your GraphQL clients and servers.
Hasura does not include authentication, but it's very easy to integrate with your current system or with services like Auth0 via JWT.
Hasura also includes a powerful fine grained role-based authorization system.
Whenever anything happens you can configure Hasura to call a URL (webhook) to do something. Maybe a REST endpoint or a cloud function. This is usually the way to integrate server side logic.
The only problem we've found is integrating Hasura with our current authorization system. Our users have multiple roles and we have no way of deciding which is the current role. Hasura requires a single role to be passed to its authorization system on the request headers. This is something that is being worked on AFAIK.
Their youtube channel has lots of little videos showcasing all the functionality.
https://www.youtube.com/channel/UCZo1ciR8pZvdD3Wxp9aSNhQ/vid...
Protobuf (Messages) -> gRPC -> Scala/Monix -> Protobuf (Models) -> FoundationDB
Sounds really cool! Is this something that comes out of the box or generated by your own plugins?
I actually contributed the F#/Giraffe generator to the OpenAPI generator project, which you can find at https://github.com/OpenAPITools/openapi-generator
What's important is that you have rigorous testing around your API.
APIs are essentially external contracts people build against. You don't want to break this contract.
Make sure that:
- it never changes unless you know about it,
- the documentation updates whenever it changes.
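One way to get "never changes unless you know about it" is to snapshot the contract and fail the build when the snapshot drifts. A minimal sketch (the spec dict and field names are illustrative):

```python
# Fingerprint the API contract; CI compares against a committed golden value.
import hashlib
import json

def contract_fingerprint(spec):
    # Canonical serialization so the hash is stable across runs and machines.
    canonical = json.dumps(spec, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

spec_v1 = {"GET /users/{id}": {"returns": ["id", "name"]}}
golden = contract_fingerprint(spec_v1)  # committed alongside the code

# Later, in CI: recompute and compare. A renamed field trips the check.
spec_changed = {"GET /users/{id}": {"returns": ["id", "full_name"]}}
print(contract_fingerprint(spec_v1) == golden)       # True
print(contract_fingerprint(spec_changed) == golden)  # False
```

Any intentional change then requires explicitly updating the golden value, which is exactly the "you know about it" step, and a natural moment to regenerate the docs too.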
Edit: the 2nd approach is also good for automated testing.
Anyone still using CORBA? Or implementing new projects with CORBA?
(i) TCP/IP
(ii) HTTP
(iii) ASN.1
(iv) SQL
(v) The key-value session state store I wrote for my Web site (cheap, simple, quick, dirty version of Redis).
Etc.
Now, how can the design and programming of such APIs be "automated"?
I think early stage and MVP projects are almost always written by hand.