auto1111 has a one-click installer, and these extensions can be installed by going to the "Extensions" tab and pasting the GitHub URL into the "Install from URL" box. auto1111 is available here: https://github.com/AUTOMATIC1111/stable-diffusion-webui
If I recall correctly, the difference between the SAM models is just a parameter-count versus accuracy tradeoff. I have the parameter counts listed under 'Installation', but the relative quality of the models is task-dependent and subjective.
I would think that part of the motivation for releasing the smaller models in addition to the larger ones is their use in video segmentation and mobile filters. The smaller models might actually be more fit for purpose for those applications than the biggest one. However, I'd recommend the biggest model (vit_h) for desktop or laptop image processing.
(I'm a dev on the project.) The privacy policy is an old and generic one that we use across a bunch of sites. It should be updated. Our retention policy on this site is as stated on the front page FAQ. After five days, the records are deleted.
This is neat, and I'm glad to see more work in this area.
My issue is that it’s hard to buy credits for something specific like this, especially when my phone does it for free. So it’s tough to compete with Apple.
I hope more work in this area gets us closer to local AI that can do this without needing a service, as I would gladly pay $10 one time (or sponsor an OSS dev) to be able to do this for the rest of my life.
I've really enjoyed Stable Diffusion run locally. Even on my crappy machine, it's nice not to have to worry about credits, and there's no ticking clock impacting my exploration.
I rather like the freemium model, where basic use is free and you pay for the API or high-res. 2MP isn't bad, that's about 1400x1400, but I'd like to see 1500 or 2000, personally. I think they'll have a tough time transitioning to paid-only.
Credits are annoying, but no one has really cracked micropayments--it's too expensive to take $1 payments.
I like how Replicate does it: just take a card and bill for usage; then you can decide to comp it or defer the charges when usage is low.
Going local is fun and practical for hobby use, but for business use an API makes more sense. Let someone else deal with hardware.
I understand why these are SaaS offerings from a business POV, but don’t understand why there aren’t more options to run locally. Are gaming GPUs like a 3080 not powerful enough?
SaaS makes more money and fits into the 4HourWorkWeek “make recurring revenue from people” playbook.
Building a thing and selling it seems like it will make less money to me.
It's funny: at a larger scale, I used to hate enterprise software. It took a year to install, had to be patched, and you had to run servers and stuff. But you paid a big amount up front and then about 10-20%/year for maintenance, and that was it. Now so many things are SaaS and cost $500-1,000+/user/year, and it's not just the cost, it's the planning and gatekeeping. Making it available to more users can be expensive. I kind of miss the simplicity of budgeting $1M and being done. Now each year it's figuring out who really needs it, cleaning up expired accounts, and being stingy about whether another team can use it or not.
One of my favorite things was how easy it was to scale and share with new users.
> Are gaming GPUs like a 3080 not powerful enough?
It really depends on the model. Just cherry-picking memory as a capacity dimension first: the SAM model from Meta ships at around 2.4GB, with roughly 636 million parameters in the largest (ViT-H) variant. That trained model fits just fine on a 12GB 3080 Ti. How fast it can compute predictions on a single 3080 Ti is a different story; in the case of SAM it does well, but this ultimately depends on how complex the given model is (not the only variable, but a big one).
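As a back-of-the-envelope sketch (assuming plain fp32 weights and ignoring activations, optimizer state, and framework overhead), you can estimate the VRAM needed just to hold a model's weights from its parameter count:

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 4) -> float:
    """Rough memory needed just to hold the weights (fp32 = 4 bytes/param);
    activations and framework overhead come on top of this."""
    return n_params * bytes_per_param / 1024**3

# SAM ViT-H has roughly 636M parameters
print(round(model_memory_gb(636e6), 2))  # ~2.37 GB, in line with the ~2.4GB checkpoint
```

Halving the precision (fp16, 2 bytes/param) roughly halves this figure, which is one reason smaller or quantized variants fit comfortably on consumer GPUs.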
> don’t understand why there aren’t more options to run locally
I think it's likely that you haven't been looking in the right places for local solutions. The deep learning space is very well represented in open source at the moment, across a wide set of verticals: language models, computer vision, speech recognition, voice synthesis, etc. You don't always get the white-glove UX that SaaS can sometimes offer, but that's true of much of the rest of the OSS world as well.
EDIT: Wanted to note that I use both a 3080 Ti and my M2 Max for a variety of DL tasks (both for training and inference).
(I'm a dev on the project.) We've not decided on the exact term of the credits, but they will be long-lasting, so you can pay $5 for 250 images and use that over the course of a few years. We'd make them non-expiring, but that creates an unbounded liability.
Thanks. I appreciate unexpiring credits and think that’s a super reasonable price.
Again, thanks for your work. I don’t want to criticize and am glad you built this. I just like to voice this opinion in case it helps, in any small way, to increase the odds of more local software.
Something like that seems eminently reasonable. Low dollar amount for enough uses that I don't need to think too much every time I press the execute button. Reasonable expiration window. No subscription which I generally prefer. (Though I'd note that Photoshop is getting very close to doing this sort of thing and a Photoshop + Lightroom subscription is actually pretty reasonable--$20/mo--if you use them a lot. That's the sort of price point that a lot of standalone generative AI tools are going to be up against.)
Generally, "Lifetime Guarantee" means "lifetime of the company," or even "lifetime of the specified product line/family/version," if you look at the fine print.
>Philosophy
>While AI is rapidly transforming the way we as a society do business, AI itself is changing even faster. What was cutting edge only a few years ago is now rapidly becoming commoditized.
>We choose to accept and accelerate this reality.
>We therefore see ourselves less as a tech startup, and more as an outsourced MLOps extension to your engineering team. Our goal is to be more 'S3' and less 'Adobe' for state-of-the-art AI image processing.
Accuracy is awesome. Tried it with a complex thing: a photo of my son sitting in a tree. While others (pixian.ai) removed his legs and left a tree branch there instead, Photoroom removed everything from the photo except the person.
I gave it a shot, but it loaded forever after I dragged an image in. Upon investigation, it sent a POST request (likely with the image) but got HTTP 423.
I also agree it is becoming a commodity. I even made an open-source tool a few years ago to remove the background from images and videos: https://github.com/nadermx/backgroundremover
Hi Matthieu, I would love to try your API, but I don't want to go through the "sign in with Google to get free credits" maze. I have a Lightroom account and I want to pay straight up, with my existing account, and leave Google and free credits out of it.
How can I do that?
It does a good enough job for studio photography with clear subjects and plain backgrounds, but fails miserably if the background is multicolour and/or contains too many objects.
The problem I have with these is that they do 98% of the removal well, but flub the last 2%. This has been the case with every one of the online and app flavors I have used (note I refuse to subscribe to Adobe.)
My use case is mineral photos, as it turns out. And I would be very surprised if the AI had been trained on these. The sad thing is -- mineral photo backgrounds tend to be very simple and smoothly-varying. Should be a slam dunk. Ah well.
A one-shot background remover doesn't give you the opportunity to interact with it and suggest that it got things wrong here and there.
Yes, I tried one of my mineral photos, and the app made several errors that it shouldn't have, as the foreground was clearly distinct from the background.
If you don't mind a bit of manual effort, Photopea's magic cut is pretty good for this kind of scenario. It is similar to the AI tools except you can designate what to lose or keep by highlighting sections.
(I'm a dev on the project.) We have another background-removal service, ClippingMagic.com, that is built around an editor to let you fix the errors in the automatic result. You may want to give it a try for your mineral photos.
A smooth background is likely worse because it carries little background semantic information; a wood table might do better there. Anyway, the white box approach does so much more for item display photos, including all-around soft lighting; background-removal AI is still quite a way from that.
As someone who is actively working with SAM, I would say that it leaves a weird border (2-5 pixels wide) around the cut-out object, so depending on the task it may or may not be suitable. And in the demo, the server returned a much better image embedding than the open-source ViT-H one. There are a lot of issues on GitHub talking about this, so take what you see on the demo page with a grain of salt.
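If that halo is a problem for your task, one cheap post-process (a sketch of my own, not part of SAM's API) is to erode the mask by a few pixels before compositing, trimming a ring of roughly that width off the cut-out:

```python
import numpy as np

def erode(mask: np.ndarray, pixels: int = 2) -> np.ndarray:
    """Shrink a boolean mask using 4-neighbor erosion: a pixel survives
    only if it and its up/down/left/right neighbors are all True."""
    out = mask
    for _ in range(pixels):
        p = np.pad(out, 1, constant_values=False)
        out = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
               & p[1:-1, :-2] & p[1:-1, 2:])
    return out

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True          # a 10x10 square
print(erode(mask, 2).sum())      # shrunk to 6x6 = 36 pixels
```

This is the blunt-instrument version; alpha matting around the trimmed edge gives softer results but costs more.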
iOS Photos has this built-in AI thing where you double-tap a photo and it separates the subject into a PNG with alpha. Imgur only made a very low-res version of your shot available, but still, this is what I got; seems pretty good, actually: https://wormhole.app/mbrY3#ab0qbAz7p26SYR0CEONDpQ
Nice work if you can get it, but fine-grained "services" like this feel dead in the water, since you're one motivated developer away from e.g. a GIMP/Krita plugin, no?
1. For "objects" and "artwork" it says that quality is above 100% (108% and 119% respectively), which is weird and doesn't inspire confidence? It's also unclear generally what those percentages mean.
2. When trying this from work it says "Unable to connect to the worker. Is your firewall or proxy blocking WebSockets?" -- it's possible that the firewall is the culprit but there should be a workaround? (All methods give the same result, drag'n drop, ctrl+v, or picking from the explorer).
You can click through to see the report. Above 100% means they performed better than the competition, not they handled every picture well.
> Pixian.AI had 241 images that were rated good, or not rated and identical. That's 87.0% of your 277 images.
> The competitor had 201 images that were rated good, or not rated and identical. That's 72.6% of your 277 images.
> Pixian.AI achieved 241 / 201 = 119.9% of the competitor's performance
> The report is based on a set of user-provided images. We then reviewed the services' respective results separately and rated them as either good or bad. The comparison was done in a blind manner, without labels indicating which result was Pixian.AI's and which was the competitor's. We then tallied up the results and produced this report.
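For what it's worth, the percentages check out against the quoted counts; the over-100% figures are ratios of the two services' good-image rates, which is why they can exceed 100%:

```python
# Counts quoted from the comparison report above.
pixian_good, competitor_good, total = 241, 201, 277

print(round(100 * pixian_good / total, 1))            # 87.0  (Pixian's good rate)
print(round(100 * competitor_good / total, 1))        # 72.6  (competitor's good rate)
print(round(100 * pixian_good / competitor_good, 1))  # 119.9 (relative performance)
```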
This falls victim to the inability to do microtransactions online. I love the philosophy and reasoning for starting yet another background-removal service, but as someone who uses similar tools very sporadically, the $5 minimum for full resolution is a hefty fee when I'll only use 2-3 of the 250 images I'm purchasing.
If I could pay for 2-3 images, I'd gladly pay 5x the price per image.
Have you considered letting users accumulate a minimum bill over time before charging their credit card?
"We intend to offer a free tier for low-volume users."
I would expect that users using below a certain amount, like $3, probably do not need to pay at all.
Not at all affiliated. I have used these kinds of APIs for ecommerce for 15+ years and am always looking for the best quality/cost ratio, and this one really impressed me.
Which model are you using? There are a bunch of different background-removal models, many with configuration options, but most of the services only provide one, without configuration. I need to remove backgrounds for my ecommerce business, and the results vary widely between models; configuring alpha matting can make a difference too. So I've been developing a tool that has all the models in one place, along with upscaling, enhancing, and inpainting models. It spins up Vultr GPU instances on demand, but that's kind of slow, so I'm also hitting APIs like Replicate, Hugging Face, and RunPod. I will integrate yours too.
For background removal, I get good results with isnet-general-use and u2net, available through rembg or huggingface. I've also been getting decent results with DIS-v1 on replicate.
The results vary so widely, especially if there are blurry or light areas, it's necessary to have options. It can also be very helpful to do preprocessing image enhancement, to remove blur or upscale, prior to the background removal. I'm sure you could even take the alpha mask from the enhanced image and use it on the original image, to help in cases where the source image has issues.
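The alpha-transfer idea at the end can be sketched in a few lines (my own illustration, assuming uint8 arrays and that the enhanced image is the same size as the original):

```python
import numpy as np

def transfer_alpha(original_rgb: np.ndarray, enhanced_rgba: np.ndarray) -> np.ndarray:
    """Take the alpha matte computed on the enhanced image and attach it
    to the original, untouched pixels, yielding an RGBA cut-out."""
    alpha = enhanced_rgba[..., 3:4]                      # (H, W, 1)
    return np.concatenate([original_rgb, alpha], axis=-1)

orig = np.full((4, 4, 3), 200, dtype=np.uint8)           # original pixels
enhanced = np.zeros((4, 4, 4), dtype=np.uint8)
enhanced[1:3, 1:3, 3] = 255                              # matte: center is foreground
cutout = transfer_alpha(orig, enhanced)
print(cutout.shape)                                      # (4, 4, 4)
```

That way the segmentation benefits from the cleaned-up image while the delivered pixels stay faithful to the source.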
There also needs to exist a service for interactive background removal, via automatic and/or interactive segmenting. Sometimes the models need a little help, and I think it's ridiculous that I still have to trace paths when the models fail.
Anyway, I love the idea and pricing model, will def try it out, but I'd like to see more details on the models being used, and I'd like to see more options and configuration.
This looks like a fun startup, I've thought of doing something similar. There's a lot of room to grow with other AI image manipulation models, not just for background removal. Shoot me an email if you would like to discuss.
We'll likely add complementary AI models (e.g. super-resolution, stable diffusion, etc), with the broad bet being that businesses definitely don't want to run their own service, and would prefer off-the-shelf to custom models (for which there's a bunch of hosting options).
For background removal specifically, all models will inevitably have some failure rate. https://clippingmagic.com is the only one with a serious editor that enables you to get exactly the result you want on any image (it's our legacy service with "old-school" SaaS pricing).
I signed up and used it via API. Decent results and FAST! I'll keep using it via API. Looking forward to seeing what else you guys come up with. Thanks!
I think that broad bet is a good one. Simple API endpoints with a good selection of curated models will certainly be a hit. There are lots of options for hosting, and quite a few API providers, but they're all some combination of overly complicated, slow, brittle, or functionally limited.
Without annoying subscriptions "yet". The "free while in beta" statement and the title of this submission contradict each other.
I'm also curious what exactly is your goal with this project? At the very least there are 20 or more businesses that do the exact same thing. I think marketing yourself in this space is going to be an absolute nightmare unless you have a massive marketing budget to game Google.
If you want goodwill then you'll need to cough up 10 removals per day for free users and then focus on business customers. But I don't think you'll be able to keep up because open-source already offers this service and it won't be long before an enterprise solution pops up on GitHub too.
(I'm a dev.) Not everyone has macOS, and running a DL model in your browser is not exactly mainstream. Which is why segment leader remove.bg gets an estimated 35M in monthly organic traffic. Also, we believe our results are significantly better than those offered by the open-source models we've seen.
I appreciate the free tier with resolution-limited downloads. And you've priced the middle tier right, because I wouldn't hesitate to drop five bucks even if I didn't know how long it would take me to churn through all 250 redemptions.
This is interesting. I signed up and will give the API a try.
Just so you know, you being "free while in beta" is concerning to me as a business customer. I would be much more enthusiastic and reassured if I could sign up on a pay-as-you-go basis right now.
The fact that it's free and you intend to monetize sometime in the future sends the signal that there's a large chance this service will be gone in a few weeks/months, which is scarier than being charged now, so we have to hedge our bet and try out competitor APIs too, just in case.
I'd rather support your startup and be on a paid plan right away.
Send your photo to your iPhone, tap and hold on the object, tap the share option that pops up over the foreground item, then choose Save Image. Then you can send that new image, with the background removed, back to your Linux system.
How to send and receive image between linux and iphone is left as an exercise for the reader.
Several hundred or possibly several thousand lines of Python code (I just checked main.py and one file from a subfolder) doesn't sound trivial to me.
That said, I am not negative, only pointing out that, at least for me (as an admittedly non-native speaker), trivial is not a word I would use to describe it.
The results are very good, and this is not surprising considering the company behind the product. What surprises me is that the creator of ClippingMagic feels the need to justify creating the tool (PAYGO instead of subscription) and is creating new competition for himself, when, like remove.bg and other competitors, ClippingMagic only offers subscriptions.
Anyway, it's good news to finally have a quality tool at a low price... hoping that once they've won the market, they won't increase prices.
(I'm a dev on the project.) We have no plans to jack up prices. We see the whole market moving to low-margin cost+ pricing and we want to lead rather than follow. If we raised prices later, we'd only expose ourselves to being disrupted in the way that we are hoping to disrupt right now. Low margin plays are all about operational efficiencies, so that we can turn a profit at price points that other providers cannot. That is our laser focus, which is why our processing time is so quick.
Clipping Magic is a totally different product. It is editor-based with a bunch of post-clip effects and features. The editor allows you to fix errors in a way that a single-shot DL-based solution simply does not. We don't actually see the services as competing with each other, since DL-based solutions have taken over the portion of the market where 80-95% success rate and some errors are ok, so long as it is fast and cheap.
The main challenge and deficiency with this and other services like this is that they assume some content type to be foreground - here it is likely "portrait" style people - it's hard to say as this information isn't included in the UX. Does it handle other foregrounds, such as cars? Who knows.
A better solution is to have controls, at least optionally, which the user can use to scribble examples of the foreground(s) and background(s).
I like this, tried it out and it performed pretty well. I signed up. It says in the pricing section that 2Mpx images are free, although my test image was larger than that.
My account at Cedar Lake Ventures does not link back to Pixian, and there doesn't seem to be any way to enter payment information on either site.
(I'm a dev on the project.) It is free while in beta. We've not implemented pricing yet, as we are a small team and just shipped the latest version of the model.
I love the tool. Would you be able to add a background-blending feature? Basically, the use case I'm thinking of is replacing the background by merging two photos, where the main photo is the subject and the secondary photo is the background. Doing blending in Photoshop is such a pain.
I will give this a try, but I’ve been happy to shell out for a few credits on remove.bg when intricate hair strands were involved [in isolating portraits]. Whatever they do just works.
Most images have excessive margins around the foreground. By allowing you to crop it instead of just shrinking it you get an effectively higher resolution result.
If you don't want to manually crop it, just press "ok".
That said we could probably improve the messaging in that dialog, thanks!
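To make the crop-versus-shrink tradeoff concrete, here's a rough sketch (the 2MP cap and image sizes are illustrative, not the service's actual numbers): downscaling the whole frame to a megapixel cap shrinks the subject along with the margins, while cropping to the subject first may avoid shrinking entirely.

```python
def subject_pixels(img_w, img_h, box_w, box_h, max_px=2_000_000):
    """Pixels left on the subject: shrinking the whole frame vs. cropping to it first."""
    # Shrink the whole frame under max_px; the subject shrinks proportionally.
    scale = min(1.0, (max_px / (img_w * img_h)) ** 0.5)
    shrink_only = int(box_w * scale * box_h * scale)
    # Crop to the subject's bounding box first, then shrink only if still too big.
    crop_scale = min(1.0, (max_px / (box_w * box_h)) ** 0.5)
    crop_first = int(box_w * crop_scale * box_h * crop_scale)
    return shrink_only, crop_first

# 12MP frame with a 1MP subject: cropping keeps the subject at full resolution,
# while shrinking the whole frame leaves it only ~0.17MP.
print(subject_pixels(4000, 3000, 1000, 1000))
```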
I am away from the computer so I don’t recall the exact menu. But it’s easily accessible in a submenu. If I recall right, the menu item is called “Remove background”.
Pretty good! Will use it to generate WhatsApp stickers. Perhaps you could use enterprise sales as a way to subsidize consumers with 1-use/year needs? :)
The problem is that there are so many cheap companies out there, it seems you could easily run into a "Sybil resource-exhaustion attack" from employees working their way around the "free for consumers, paid for businesses" rule.
It runs from a python script.
Some explanation of the differences between these models would be nice for a noob.
Surprisingly (?) my convoluted setup is slowly becoming this actually useful toolkit for various tasks I sometimes need to do.
>Records not associated with an account are deleted or anonymized within a year of creation. Image processing records associated with an account are retained indefinitely.
>We retain them on your behalf so that you can view and download them as you wish, and to provide customer support.
Thank you, but no thank you.
https://pixian.ai/remove-image-backgrounds says:
> Right now, we retain images and results for five days after they are uploaded, after which they are permanently deleted. Please note that our data retention policies may change over time, and this current policy does not bind us in the future, or require your affirmative consent to change.
https://pixian.ai/policies/terms says:
> User Submissions and any associated Results will expire 2 weeks from the point of upload.
> The Service may provide users with the option to delete User Submissions to have them expire before their normal expiration time.
> Expired User Submissions and any associated Results are subject to deletion or retention at the Company's sole discretion.
https://pixian.ai/policies/privacy
If I could upvote twice, I would.
Well done and well said.
I wish them many hearts won.
Photoroom users: feel free to use https://pixian.ai/comparisons to compare the results.
(We haven't done a comparison with them so genuinely don't know if they're better or worse)
https://gist.github.com/mvsantos/5554663
I don't know if I'm allowed to post a photo link, but here it is: https://imgur.com/a/V9H1pRH .
https://www.photopea.com/tuts/magic-cut-remove-image-backgro...
https://segment-anything.com/demo#
It worked fine on your example with a couple of clicks.
(also, cool shot!)
https://pixian.ai/comparisons/cwsslt8d78zl7vq/share/5e51cd3d...
Even a pre-trained model and self-hosted API like RemBG[0] performs a lot worse than this "pixian" service does.
[0] https://huggingface.co/spaces/KenjieDec/RemBG
All in all, I'd say they put a LOT of effort into scaling (it's FAST) and the models (they're extremely accurate).
Yes, there are a lot more than 20 competitors. Our goal is to hit an attractive point on the quality-price Pareto frontier that they just can't match.
That said, marketing will definitely be a challenge ;)
https://github.com/xuebinqin/U-2-Net
Pretty trivial thing to implement.
Open source implementation of this
Is that intentional?
But almost all API users and most regular users just want a result and to not have to futz with it, hence this offering.
So, where and how exactly do I pay?
> Their margin is our opportunity.
A bit off topic but is that quote originally from Jeff Bezos, or did he just make it more famous?
Anytime I hear it, I immediately presume the person saying it worked at Amazon. Because I say it, and I did too.
It's free and works fine for my needs. Good luck beating free.
What's the best free web-based AI inpainting tool?
(...or pixian will also offer this functionality?)
- Remove background
- Replace background with white
- Upscale image to 1000px
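The "replace background with white" step is just alpha-compositing the RGBA cut-out onto a white canvas; a minimal sketch (assuming a uint8 RGBA array, e.g. as loaded with Pillow; this is an illustration, not the service's implementation):

```python
import numpy as np

def composite_on_white(rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite an RGBA cut-out (uint8) onto a plain white background."""
    rgb = rgba[..., :3].astype(np.float64)
    alpha = rgba[..., 3:4].astype(np.float64) / 255.0
    out = rgb * alpha + 255.0 * (1.0 - alpha)
    return out.astype(np.uint8)

# Opaque black stays black; a fully transparent pixel becomes white.
px = np.array([[[0, 0, 0, 255], [0, 0, 0, 0]]], dtype=np.uint8)
print(composite_on_white(px))
```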
https://pixian.ai/api#remove-background