Impressive project. I was curious how it discovers data relationships and was going to check the repo, but it looks like there's no code, only issues and releases. Is that right?
If this can save me time at work, I'd be happy to throw some money at it.
My bosses, OTOH... let's just say there's no penalty within companies for pointy-haired bosses who ignore staff and never get around to purchasing something like this.
It's a false economy but I'm tired of it and just purchase what I can afford.
This might be more useful than the OP. This thing lets you translate HAR to Swagger…
Why would you use swagger/openapi? My understanding is that graphql has its own schema system that's supposed to be returned by the server when the client requests it.
The first and immediate difference for me is the ability to recall the name. I can recall Postman/Insomnia fine, and now API Parrot too. I'm never going to be able to recall mitmproxy2swagger.
Ha! Nicely played. That was out of pure laziness. I don't like using one handle across sites, so I take the first 8 chars of (New-Guid).ToString() and then dump it in my password manager.
As someone who uses mitmproxy and swagger quite often, I actually think the name isn't so bad. I haven't even looked at the readme but I already know what it does, how to run it and what output to expect.
I often forget the name of things, sometimes even the big ones. GitHub search is one of the primary ways I rediscover them. "reverse-engineer API" returns mitmproxy2swagger as the third result, and this is how I found it last time I needed it.
It is a bit frustrating when a project on GitHub doesn't have good tags or searchable keywords, making it harder to find.
Very sad half the comments are asking for MacOS app.
The rise of doing server development on MacOS when the final target is Linux will cause long-term harm to the newer generation of engineers.
And the unreasonable hostility towards macOS will have zero effect because in the end the best product wins.
Did the rise of Windows cause long-term harm to past generations of engineers? I doubt it, since even Windows, with its gigantic market share, was eventually forced to implement Linux "compatibility" for developers.
There are three popular operating systems for the modern developer, and it's not unreasonable to ask for a build for all of them when presenting a project to a developer-focused community.
The rise of a MacOS sort of monoculture certainly affected those developers who were still on Windows. It drove me off Windows, a system I otherwise appreciated just fine. I never cared much for MacOS, though, so I went to Linux, but there I'm also constantly feeling the pain of so many developers being on MacOS, as there are so many incompatibilities between the two. So in the end I guess I prefer things that run everywhere, which this Parrot thing may be within reach of, being Electron. In that sense I support the ask for a MacOS version. But boy, could the MacOS crowd just stop throwing their weight around?
Edit:
Examples:
* Tools that are only available on MacOS (remember the days when tools were only available on Windows)
* I write a BASH script which then doesn't work for the MacOS coworkers
* Tools that are supposedly platform-independent have Linux-specific errors that get no love because their developers don't care about Linux
Why? I mostly code on Mac and deploy on Linux (or FreeBSD). Never really encountered a situation where programming a web app on Mac has caused issues when deploying to the server.
When you write web code you should never have to worry about that. Actually, if you write any user-space code (drivers excepted), you shouldn't have to worry about that. If you do have to worry about it, reconsider your tooling very seriously.
My experience is that having a team with mixed platforms has helped reduce deployment woes, with the rare platform-specific bugs getting worked out beforehand.
Talking about rabbit-holes. I used to have prototype OS/2 PowerPC 64-bit hardware from IBM before they killed the project. I should have kept that early EFI-based system. When the EFI boot sequence would panic, you would get an error message of "Danger Will Robinson".
Or maybe some of the newer generation will take time to update Linux to be more competitive with macOS for developers. Could be a long term win for Linux fans.
I asked Framework that repeatedly, but no progress. I think they might be violating EU Regulation 2018/302, which is rather common, mostly due to ignorance. The problem is that it is rather hard to enforce such regulation to non-EU/EEA companies. You can still send your wishes to [email protected].
Update: you can buy from Norway now, but you need to get it shipped to a different country. You need to select a different country and then choose a billing address different from the shipping one. The message the website displays about not being able to order from Norway is misleading, and it looks like no email about this possibility has been sent to Norwegian customers. Not perfect, but they got better.
Not sad at all! Mac has excellent hardware, excellent reliability, excellent day-to-day performance. I'm not a fanboy, but it won for (IMHO) clear and obvious reasons. Of course folks want a Mac app. No comment on the “harm” bit.
It always amazes me when people chastise others for using Macs.
It is by far the most robust hardware and 15 years later Windows laptops may finally be catching up.
My first programming job was LAMP so I had a Linux desktop and loved it. Later I got a new job that gave us laptops, but they were quite beefy.
I had a Dell laptop with an Nvidia GPU and an Intel iGPU... After updating my OS, the Nvidia GPU was the only way to use my laptop, which made the battery die in under an hour and, of course, ran much hotter.
I tried numerous driver installs (proprietary, open source), reinstalling the OS, a different OS... Nothing got it working again on a newer version of the Linux kernel.
Went to the Apple Store, bought a MBP, and have never had an issue since. Not one dead laptop in 10 years; I plug in my USB-C dock and go.
2 years later, what happened to one of my coworkers? Same exact thing. He spent 3 days trying to fix it and basically had a workaround that crashed occasionally.
I get paid to produce working software, not to configure my OS. And people wonder why Macs are so popular?
Macbooks have been nice since M1 era, but the Intel Macbooks between years 2013-2020 were hardly robust. My partner's 2014 MBP Retina's screen plastic film started peeling off, which was a known design flaw of those models. Later the ones with butterfly keyboard were notoriously unreliable, with keys getting stuck.
Personally I haven't had much trouble with Linux on modern Thinkpads. Very little to configure manually, as long as you pick the right distro. Even a Dell laptop at work with Linux isn't causing me much OS-related issues, although battery life sucks.
Well, no. The 2015 MBP is a well known workhorse that stretched many people professionally up to the M1. I would absolutely agree that the 2016-2020 Intel MacBooks were rough though.
I agree, people don't realize the value of not depending on a single company to do their work. We can see this problem even more with LLM code generators.
Counter-argument: it could be risky to dev on and deploy to a single monoculture.
But empirically, I've been developing on macOS (etc) and Linux (often simultaneously), and deploying to Linux (Debian, RHEL/AL), Solaris (etc), and FreeBSD ... for more than 20 years.
Aside from package management tooling differences, package naming, and package content splits (e.g. pkg vs pkg-dev) -- all of which are equally inconsistent between Linux distros -- I cannot recall a single issue caused by this heterogeneity.
Really? In the modern .Net world (originally .Net Core) it's very common for devs to use Windows machines to write code whose CI pipelines and deployed environments are all Linux. I've seen a handful of issues with things like path separators and file system case sensitivity, but we're talking about 3 or 4 minor problems in 6-7 years that I've been using it.
(also yes, people keep asking "what about linux" and think it's bad when you say there is literally nothing extra to consider in 95% of situations, sigh)
I'm actually going to switch to Mac as a pilot for our team at some point this year! I don't expect any issues, I already use Rider and have done plenty of .Net stuff on my personal machine which is a M3 MBP. Really IMO the only question marks will be around using Parallels when we need to occasionally work on a legacy .Net Framework app.
Love the idea. I’m always finding myself writing little user scripts / browser extensions to extend websites I use all the time, and trying to use an API I found in the devtools network requests page always gets annoying when I have to try and do anything beyond replicating the exact input/output I found in the original request.
Haven’t fully looked through the features/docs, so forgive me if my question is answered in there, but what does support look like for:
- Exporting to Swagger/OpenAPI Spec
- Exporting to generated SDK (I know some tools exist that can generate SDKs from OpenAPI/Swagger, so maybe some of these tools have licenses that are compatible with your product?)
- Support for URL path variables (e.g. `/users/{user_id}`)
- Support for URL query parameters (and filtering for common “noise” parameters, e.g. Google analytics)
- Support for non-JSON input/output (e.g. an endpoint that accepts multipart form data)
Awesome idea though. I’m definitely going to try this out. Beautiful UI and website too. I’m stoked to play around with this!
Just so you know, there is an app called Traffic Parrot (https://trafficparrot.com/). They operate on the same market, so they may not like the name you chose.
Yes, I plan to release a macOS version of API Parrot. Unfortunately, I currently don't own a Mac, and since building macOS applications requires one, this has delayed the release. I'm actively exploring solutions, such as accessing a Mac environment remotely or acquiring the necessary hardware.
This is incredible. We’ve spent ages and ages figuring out the weird internals of certain legacy systems that we’ve ended up having to use bots or RPA to integrate with. If you can polish this into a true product, we would pay for it!
I've just gone through the "Docs" section and I appreciate how it covers the intended workflow and use cases. I'm on Debian/Intel and other than the need to install Chrome I only had a few small issues.
++ A self-contained AppImage is a good way to go, but where do you put it? A default install location should be suggested in the docs for those used to an `apt install`.
I went `sudo wget $URL -P /usr/local/bin/` and `chmod +x $appimage`. This worked fine until Collection creation, when some internal state change smacked into my root-owned file permissions. I `chmod 777`'d it and restarted the app; no more issue. It's my machine and I can chmod how I want, but I think doc clarity would help those unfamiliar with AppImage.
++ Renaming projects, collections, etc. is cumbersome. For example, clicking the 'New Project' pencil opens a rename window, and several more steps are then needed to rename the project. That single click could open the window, give it focus, and put the cursor in an empty name field, so typing the new name and pressing 'Enter' is all that's left.
++ Ability to toggle the Properties column. On a 14" hi-res laptop the screen is crowded. Being able to resize the Project column width would also help.
++ The default flow view size is too small.
I hope that's helpful. A small number of UI tweaks and it's already at the "Don't F*** With It!" stage. The issues above are small and don't take away from how great it is and how EXCITED I was going through the tutorial. I went through the entire docs and the tutorial and I think it's a fine program. Your layout of the DOM response is also really nice!
One of the issues with these tools is that more and more websites now employ multiple aggressive CAPTCHAs, fingerprinting, device checks, etc., rendering tools like API Parrot almost useless.
Thank you for pointing this out. I've addressed the issue, and it should now be fixed in version 0.2.1, which is available for download on the website. Please update to the latest version, and let me know if you encounter any more problems.
Interesting, but... On the first website I tried it on (which I'm currently working on due to a change of platform), it couldn't find anything other than the main request, and I know for sure there is a POST request to the API to get some data (I had a scraper working, the website changed, and I had to redo the scraper).
I've checked the tutorial and it seems I'm not missing any step; the software simply cannot capture anything if the request is made on the main page. It seems to work fine with forms, buttons, and "manual" actions.
I can DM you the website plus the expected request that is made, visible with any browser internal debugging tools.
Nice project, I was able to use it to map out some parts of a vendor's API that's been giving me grief today. I'm pretty amateur and this was really intuitive. Happily putting this in my toolbox.
Interesting project, I've often looked for something like this but haven't found anything that does the job. I'm on a mac and can't wait to try this out. Can I ask what you're using adblock-rs for?
Glad you like the project! I'm working on getting the macOS version built and released as soon as possible. If you'd like to be notified when it's ready, you can sign up for the newsletter here: https://apiparrot.com/#newsletter.
As for adblock-rs, I'm using it to detect and automatically disable requests related to ads and other unnecessary stuff. This helps cut down on noise and saves some time for developers.
I extracted the zip, found the Electron build folder, replaced a string in the minified code to launch Chrome on macOS correctly, and ran electron-packager with the target set to macOS.
Patching electron apps is fairly common. You can take a look at Spicetify or BetterDiscord to see the process in more detail
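As an illustration of that kind of patch (the actual file and the exact string the commenter replaced aren't known, so the path and strings below are made up):

```python
from pathlib import Path

# The path and strings here are hypothetical; they stand in for whatever
# hard-coded value the minified bundle actually contains.
bundle = Path("app-extracted/resources/app/dist/main.min.js")

if bundle.exists():
    source = bundle.read_text(encoding="utf-8")
    patched = source.replace(
        "/usr/bin/google-chrome",  # hypothetical Linux-only Chrome path
        "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
    )
    bundle.write_text(patched, encoding="utf-8")

# Afterwards the app gets repackaged for the new platform, e.g.:
#   npx electron-packager app-extracted/resources/app ApiParrot \
#       --platform=darwin --arch=arm64
```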
This is pretty cool. I ran it against one of our largest customers' sites and it was very interesting to see how the page all interconnects. I'm pretty sure it can be used to spot architecture/performance problems.
Lots of interesting ideas; a lot of the same methodology is used by bug bounty hunters/pentesters. It gives me some perspective for building something into my own tool.
Love this. I’ve worked on a few RPA projects before and I’m losing faith in selectors. I think either direct data access like this or AI-based CV are the automation arms of the future.
I’m not able to read what the product actually does - I keep getting distracted by the ‘snake’ animation surrounding the content .. not sure what the purpose is ;-)
interesting but not sure what the value add here is, it gives you a graph flow of all the API requests being made? and then the goal is to replay them?
aren't there github libraries that do this already?
If only there were something with a schema, like XML, that people would use for APIs ;) You could generate diagrams from WSDL and even generate client code from that.
There is also a bunch of JSON Schema stuff nowadays.
But yeah, for a lot of people, schemas for API contracts feel like too much work and too much hassle.
JSON serialization doesn't throw errors for new properties quickly added on the sending side, and the receiving side can ignore extra stuff - well, as long as the API semantics allow it, but that's generally going to be a hassle, even with LLMs somehow auto-fixing your "schema".
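As a small illustration of that last point, here is a sketch in Python using the jsonschema package: a receiver that only reads the fields it knows about never notices a new property, while an explicit schema flags the drift.

```python
import json
from jsonschema import ValidationError, validate

# A receiver that only reads the fields it knows about silently ignores
# a property the sender added later.
payload = json.loads('{"id": 1, "name": "foo", "new_field": "surprise"}')
print(payload["id"], payload["name"])  # works; "new_field" goes unnoticed

# An explicit schema makes the mismatch visible instead of letting it slide.
schema = {
    "type": "object",
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    "required": ["id", "name"],
    "additionalProperties": False,
}
try:
    validate(instance=payload, schema=schema)
except ValidationError as err:
    print("contract drift detected:", err.message)
```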
Currently only HTTP requests are supported. I might add support for websockets later, however that is a harder problem to solve due to the binary encoding etc.
Looks amazing! Thanks for sharing; I will give it a shot in a short while. Btw, how do you keep yourself motivated to work on free projects? Obviously it takes a lot of effort and no one is paying for it.
Working on this side project has been both fun and rewarding. I've learned a lot throughout the process, which keeps me motivated even without immediate financial gain. I have plenty of ideas on how to improve the software in various ways. Some of these enhancements could become part of a "pro" version tailored for businesses. My long-term ambition is to turn this into a full-fledged product, which would enable me to dedicate more time to its development.
Which leads me to...
- Is this closed source?
- Does it cost money?
- How does it discover data relationships?
- Is this closed source?
Currently, the code is not open source, but I might open-source parts of it in the future.
- Does it cost money?
The software is free to use. If there is demand, I might create a "pro" version for businesses in the future. However, I intend to always have a free version available for individuals.
- How does it discover data relationships?
I've discussed how it discovers data relationships in the documentation here: https://docs.apiparrot.com/docs/tutorial-extras/exchange-mod....
In short, the tool breaks down the data in the requests and responses into smaller parts by identifying their formats. For example, `["foo", "bar"]` would be recognized as a JSON array and broken down into the elements `"foo"` and `"bar"`. By applying this method recursively, you build a tree-like structure of the data.
If an exact match is found between data in a response from a previous request and data in a subsequent request, a correlation is detected.
Please feel free to ask if you have any more questions!
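To make the correlation idea above concrete, here is a rough Python sketch of the general approach. It is not API Parrot's actual implementation, just an illustration of recursively breaking payloads into leaf values (only JSON is handled here) and matching them by exact equality:

```python
import json

def extract_leaves(value, path=""):
    """Recursively break a value into leaf values, recording a path to each.

    Strings that parse as JSON are expanded further, mirroring the
    format-detection idea described above."""
    leaves = {}
    if isinstance(value, dict):
        for key, child in value.items():
            leaves.update(extract_leaves(child, f"{path}.{key}"))
    elif isinstance(value, list):
        for i, child in enumerate(value):
            leaves.update(extract_leaves(child, f"{path}[{i}]"))
    elif isinstance(value, str):
        try:
            parsed = json.loads(value)
        except ValueError:
            leaves[path] = value
        else:
            if isinstance(parsed, (dict, list)):
                leaves.update(extract_leaves(parsed, path))
            else:
                leaves[path] = value
    else:
        leaves[path] = value
    return leaves

def find_correlations(previous_response_body, next_request_body):
    """Report leaf values that appear verbatim in both payloads."""
    response_leaves = extract_leaves(previous_response_body, "response")
    request_leaves = extract_leaves(next_request_body, "request")
    correlations = []
    for req_path, req_value in request_leaves.items():
        for resp_path, resp_value in response_leaves.items():
            if req_value == resp_value:
                correlations.append((resp_path, req_path, req_value))
    return correlations

# Example: a token returned by one call shows up in the next request.
login_response = '{"token": "abc123", "items": ["foo", "bar"]}'
next_request = '{"auth": "abc123", "query": "foo"}'
print(find_correlations(login_response, next_request))
# -> pairs like ('response.token', 'request.auth', 'abc123')
```

A real tool would also recognize other formats (query strings, form data, JWTs, etc.) and probably ignore trivially common values to cut down on false positives.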
https://github.com/alufers/mitmproxy2swagger
My usual process is Dev tools -> Copy as cURL -> delete unnecessary headers -> translate to requests in Python (these days I just use ChatGPT) -> wrap in a Python SDK for managing auth etc.
The OP’s correlation features are really nice though.
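For reference, a minimal hand-rolled wrapper along the lines of that last step might look like the sketch below; the base URL, login endpoint, response field, and header are hypothetical placeholders for whatever the captured traffic actually shows:

```python
import requests

class ExampleClient:
    """Tiny hand-rolled wrapper around a reverse-engineered API."""

    def __init__(self, base_url, username, password):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self._username = username
        self._password = password
        self._token = None

    def _login(self):
        # Replays the login request observed in the browser's network tab.
        resp = self.session.post(
            f"{self.base_url}/api/login",
            json={"username": self._username, "password": self._password},
            timeout=30,
        )
        resp.raise_for_status()
        self._token = resp.json()["token"]

    def get(self, path, **params):
        # Lazily logs in, then sends the token the way the site expects it.
        if self._token is None:
            self._login()
        resp = self.session.get(
            f"{self.base_url}{path}",
            params=params,
            headers={"Authorization": f"Bearer {self._token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

# client = ExampleClient("https://example.com", "user", "hunter2")
# print(client.get("/api/items", page=1))
```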
Thank you for sharing this though, I was looking for a tool like this :)
https://graphql.org/learn/schema/
There's other tools out there that can generate similar docs or playgrounds, given you have a schema/spec of some type.
Unfortunately, names matter.
I can imagine this happening if a team has a myriad of hardware/OS flavors and different server setups.
Now if Framework laptops were available in Norway, I'd probably rather have that, even if they're not as powerful.
Also, depending on where you work, there might be restrictions in the choice of platform. Usually limited to Mac or Windows.
https://knowledgebase.frame.work/en_us/what-countries-and-re...
Unnecessary abstraction
I dream of the day Apple releases official docker images. Building for iOS is the only reason I have to touch a Mac.
> [...] or a iOS developer work in Linux
In the past I did a lot of successful work on iOS apps from a Windows system, thanks to Xamarin and a mac sitting on a shelf, acting as the remote system.
Also, please, remember what "cross compilation" means.
Besides, most devs doing web development on Macs are also using Docker, which is always Linux.
https://www.macstadium.com/
Please note that since the app isn't code-signed yet, you'll need to remove the quarantine attribute to run it. I've updated the documentation with instructions on how to do this: https://docs.apiparrot.com/docs/getting-started/download-and...
Let me know if you have any questions or run into any issues!
Any chance of a Mac version?
> API Parrot is the tool specifically designed to reverese engineer the HTTP APIs of any website.
It should now be fixed.
I've added a newsletter sign-up form at the bottom of the webpage: https://apiparrot.com/#newsletter
Feel free to subscribe to receive notifications when we release the macOS version.
> aren't there github libraries that do this already?
which ones?