I'm building my own polygon modeling app for iOS as a side-project [0], so I feel a bit conflicted.
Getting fully featured Blender on the iPad will be amazing for things like Grease Pencil (drawing in 3D) or texture painting, but on the other hand my side-project just became a little bit less relevant.
I'll have to take a look at whether I can make some contributions to Blender.
[0] https://apps.apple.com/nl/app/shapereality-3d-modeling/id674...
There’s also the excellent Nomad Sculpt, which, while not a mesh editor, is an incredibly performant digital sculpting app. Compared to Blender’s sculpt workflow it maintains a higher frame rate and a smaller memory footprint at a higher vertex count. Of course it’s much more limited than Blender, but its sculpting workflow is much better, and you can export to Blender afterwards.
There is room for more than one modeling app on iOS as long as you can offer something that Blender doesn’t, even if it’s just better performance.
On the contrary, your project just became even more relevant. Blender badly needs an alternative/competitor. Everybody loses if a single project dominates.
One thing Blender lacks is easy 3D texture painting. As far as I know, there isn't a decent 3D texture painting iPad app either. Definitely a gap in the market.
To cheer you up: in my experience over the lifetime of the App Store, anytime something like this arrives on the Store, it's a big win for independent side projects.
Your project might be way cheaper and solve a specific problem, so it would benefit from the awareness that Blender's large marketing footprint will inevitably leave behind ;)
Keep building!
I haven't done much designing. Starting with a cube and then sculpting and transforming it seems to be Blender's approach. Are there any other approaches for designing 3D shapes and assembling them?
Basically, Blender says "start with a cube". I want to ask why, and what the other options are.
Blender doesn't say "start with a cube". Its default "General" scene has a cube. You can add whatever meshes, text, curves, metaballs, grease pencil, volumes, etc. you want.
As far as workflows go, there are far too many to list, and most artists use multiple, but common ones are:
* Sculpt from a base mesh, increasing resolution as you go with remeshing, subdivision, dyntopo, etc.
* Constructively model with primitives and boolean modifiers (see the Python sketch after this list).
* Constructively model with metaballs.
* Do everything via extrusion and joining, as well as other basic face and edge-oriented operators.
* Use geometry nodes and/or shaders to programmatically end up with the result you want.
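As a minimal sketch of the constructive, boolean-modifier bullet done programmatically through Blender's Python API (run from the Scripting workspace): the object names, sizes and locations below are just made-up examples, and a real geometry-nodes setup would normally be built in the node editor rather than scripted like this.

    import bpy

    # Base cube plus a sphere to carve out of it -- constructive modeling via code.
    bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0, 0, 0))
    cube = bpy.context.active_object

    bpy.ops.mesh.primitive_uv_sphere_add(radius=1.2, location=(1.0, 0.0, 1.0))
    sphere = bpy.context.active_object

    # Boolean DIFFERENCE modifier: subtract the sphere from the cube.
    cut = cube.modifiers.new(name="Cut", type='BOOLEAN')
    cut.operation = 'DIFFERENCE'
    cut.object = sphere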
I think "hard surface modeling" is a keyword you might want to look up. Also look into traditional CAD and "parametric design". All of these have considerable overlap and are focussed on mostly "boxy shapes" with some "free form surfaces" added.
On the other end of the spectrum you have "sculpting" for organic shapes, and you might want to dig into the community around ZBrush if you want to fill the gap from "start with a cube" to real sculpting.
More niche, but where I feel at home, is the model-with-coordinates-and-text-input approach. I don't know if it has a real name, but here is where it pops up and where I worked with it (a rough Blender-Python analogue follows the list):
- In the original AutoCAD (think '90s), a lot of the heavy lifting was done on an integrated command line, hacking in many coordinates by hand.
- POV-Ray (and its predecessors and relatives) has a really nice DSL to do CSG with algebraic hypersurfaces. Sounds way more scary than it is. Mark Shuttleworth used it on his space trip to make a render, for example.
- Professionally I worked for a couple of years with a tool called FEMAP. I used a lot of coordinate entry and formulas to define stuff and FEMAP is well suited for that.
- Finally, OpenSCAD is a contemporary example of the approach.
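Blender isn't really built around this style, but here's a rough, hedged Blender-Python analogue of the coordinates-and-text idea (the mesh/object name "TypedQuad" is just an example): a quad defined purely from typed-in coordinates.

    import bpy

    # Four corner coordinates and one face, entered as plain text.
    verts = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)]
    faces = [(0, 1, 2, 3)]

    mesh = bpy.data.meshes.new("TypedQuad")
    mesh.from_pydata(verts, [], faces)  # vertices, edges, faces
    mesh.update()

    obj = bpy.data.objects.new("TypedQuad", mesh)
    bpy.context.collection.objects.link(obj)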
- sculpting. You start with a high density mesh and drag the points like clay.
- poly modeling. You start with a plane and extend the edges until you make the topology you want.
- box modelling. You start with a cube or other primitive shape and extend the faces until you make the shape you want.
- NURBS / patches: you create parts of a model by moving Bézier curves around and creating a surface between them
- B-reps / parametric / CSG / CAD / SDF / BSP: these are all similar in that you start with mathematically defined shapes and then combine or subtract them from each other to get what you want. They’re all different in implementation though (a tiny SDF sketch follows this list).
- photogrammetry / scans : you take imagery from multiple angles and correlate points between them to create a mesh
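To make the SDF part of that bullet concrete, here's a minimal sketch in plain Python/NumPy (the function names are mine, not from any particular tool): a shape is just a function returning the signed distance to its surface, and combining shapes is a pointwise min/max; turning such a field into a mesh or image is a separate step.

    import numpy as np

    def sdf_sphere(p, center, radius):
        # Negative inside, zero on the surface, positive outside.
        return np.linalg.norm(np.asarray(p) - np.asarray(center)) - radius

    def sdf_box(p, center, half_size):
        q = np.abs(np.asarray(p) - np.asarray(center)) - np.asarray(half_size)
        return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

    def sdf_union(d1, d2):
        return min(d1, d2)

    def sdf_subtract(d1, d2):
        return max(d1, -d2)

    # Distance from one sample point to "box with a sphere carved out of it".
    d = sdf_subtract(sdf_box((0.4, 0, 0), (0, 0, 0), (1, 1, 1)),
                     sdf_sphere((0.4, 0, 0), (1, 0, 0), 0.8))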
For sculpted works (best for organic and free-form style character and object designs), starting with a "clay sphere" is also common. In terms of apps for the iPad: Nomad Sculpt is well favoured there.
Another popular approach to designing shapes is live addition/subtraction. Nomad supports that as well, but it's not as intentional as, say, 3D Studio Max (for Windows), where animating those interactions is purposely included as its point of difference.
There's also solid geometry modelling, which is intended for items that would be manufactured; SolidWorks is common for that.
Then finally you have software like Maya which lets you take all manner of approaches, including a few I haven't listed here (such as scripted or procedural modelling). The disadvantage here is that the learning curve is a mountain.
And if you like a bit of history, POV-Ray is still around, where you feed it text markup to produce a render.
Open a new Blender file, select Sculpting, and you start with a high-poly sphere. The default cube is not suitable for sculpting unless you subdivide it several times.
You can create whatever start up file you want.
Other approaches include subdivision surface (subsurf) workflows, metaballs, signed distance functions, procedural modelling (geometry nodes) and grease pencil.
I'm curious about this too. On the engineering side, tools like Solidworks and Fusion start from extruding a planar sketch— which is a model that maps well to conventional manufacturing techniques, but isn't very artistic.
I really want to be able to write with the Pencil in coding apps on the go, now that handwriting recognition has gotten good enough, but so far most of them provide a very lame experience.
It is like people don't really think outside the box about how to take advantage of new mediums.
The same applies to all those AI chat boxes: I don't want to type even more, I want to just talk with my computer, or have AI-based workflows integrated into tooling that feel like natural interactions.
> The initial platform where this idea will be tested is the Apple iPad Pro with Apple Pencil, followed by Android and other graphic tablets in the future.
Asus has been releasing, year after year, two performance beasts with a very portable form factor, multi-touch support and top-of-the-line mobile CPUs and GPUs: the X13 and the Z13.
https://rog.asus.com/laptops/rog-flow-series/?items=20392
Considering the Surface Pro line also gets the newest Ryzen AI chips with up to 32 GB of RAM, having them as second-class citizens is kinda sad.
PS: Blender already runs fine as-is on these machines. But getting a new touch paradigm would be extremely nice, and would be a better test bed than a new platform IMHO.
Maybe the problem is just impossible, or maybe AI assistance will solve it, but it's crazy to me how complex 3D software like Blender/Maya/3ds Max/Houdini etc. still is. There are 1000s and 1000s and 1000s of settings, deep hierarchies of complexity. And mostly to build things that people built without computers and with only a few tools in the past. I had hoped VR (or AR) might somehow magically make this all more approachable, but no, it's not much easier in VR/AR. The only tool I've seen in the last 30 years that was semi-easy was the Spore Creature Creator, though of course it was super limited.
I guess my hope now is that, rather than selecting all the individual tools, I just want AI to help me. At one level it might be trying to recognize gestures as shapes: you draw a near circle, it makes it a perfect circle. Lots of apps have tried this. Unfortunately they are so finicky, and then you still need options to turn it off when you actually don't want a perfect circle. Add more features like that and you're back to an impenetrable app. That said, maybe if I could speak it: I draw a circle-like sketch and say "make it a circle"?
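The "near circle becomes a perfect circle" step is basically a least-squares fit plus a snap threshold, which is also where the finickiness comes from. A minimal sketch of the kind of thing such apps do (plain Python/NumPy, a Kåsa-style algebraic fit; function names and the tolerance are mine):

    import numpy as np

    def fit_circle(points):
        # Fit x^2 + y^2 + A*x + B*y + C = 0 in the least-squares sense.
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        M = np.column_stack([x, y, np.ones_like(x)])
        rhs = -(x**2 + y**2)
        (A, B, C), *_ = np.linalg.lstsq(M, rhs, rcond=None)
        cx, cy = -A / 2.0, -B / 2.0
        r = np.sqrt(cx**2 + cy**2 - C)
        return (cx, cy), r

    def looks_like_circle(points, tol=0.05):
        # Snap only if every sample sits within tol * radius of the fitted circle.
        (cx, cy), r = fit_circle(points)
        pts = np.asarray(points, dtype=float)
        dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        return np.all(np.abs(dists - r) < tol * r)

Whether the stroke gets snapped then hinges entirely on that tolerance, which is exactly the "turn it off when you didn't want a perfect circle" problem.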
But it's not just that, trying to model almost anything just takes soooooooooo long. You've got to learn face selection, vertex selection, edge selection, extrusion options, chamfer options, bevel options, mirroring options, and on and on, and that's just the geometry for geometric things (furniture, vehicles, buildings, appliances). Then you have to set up UVs, etc...
And it gets worse for characters and organic things.
The process hasn't changed significantly since the mid-90s AFAICT. I learned 3ds in '95. I learned Maya in 2000. They're still basically the same apps 25-30 years later. And Blender fits right in, being just as giant and complex. Certain things have changed: sculpting like ZBrush, node geometry like Houdini, and lots of generators for buildings, furniture, plants, trees. But the basics are still the same, still tedious, still need 1000s of options.
It's begging for disruption to something easier.
This is excellent news. So many artists are now using Procreate on iPad Pros as their primary platform. I do not miss the days of using Puppet to juggle the configs of various overly expensive and user-hostile DCC software. The barrier to entry used to be so high for designers.
I teach digital painting, and Procreate is slowly becoming my enemy. I fully appreciate its ease of use, its fantastic union with the Apple Pencil, and certainly my students love it. But doing design/creative work on a small screen is not healthy, especially for complex images. Neither is it easy to maintain a complex workflow, such as that required by matte painting and multi-layer compositing. Also, any presence of tablets in a design teaching lab is never pretty... I can't easily review their files or integrate their output into a pro desktop app.
Cost is likely another big reason it’s popular with students. A $13 one-time purchase is hard to beat… even with edu pricing, Adobe CC quickly gets more expensive. Clip Studio Paint falls somewhere in the middle.
Replying mainly to the title - but I'm surprised VR-based 3D modeling never took off. I've only dabbled with Blender, I know it's a powerful tool, but the learning curve just to navigate the interface is steep - compared to some creative VR programs which felt instantly intuitive for 3D modeling. I guess for a professional digital artist, having fine technical control of your program is more important.
The biggest holdup for VR sculpting is you have nowhere to rest your hands or tools. In a physical medium you can rest your weight and tools against the clay, glass etc. that you're working with.
This is part of the reason why high end 3d cursors have resistive feedback, especially since fine motor control is much easier when you have something to push against and don't have to support the full weight of your arm.
I used some build of Blender on Windows Mobile; crazy how efficiently it worked on a 400 MHz HTC Niki, the only problem was the screen size, so the performance part seems to have been solved since forever :D
Having a UI/UX for tablets is awesome, especially for sculpting (ZBrush launched their iPad version last year, but since Maxon bought it, everything is subscription-only).
I joined the Blender development fund last year, they do pretty awesome stuff.
Anyone else having issues with video playback? The videos don’t play at all on iPhone. The interaction design part is the most interesting for me; I’m curious to see what the team has come up with.
Speaking of interfaces, when will we have one that works just by thinking—something less intrusive than Neuralink—that lets us control not just Blender, but the entire computer? I think my productivity would increase a lot...
I worked in non-invasive BCIs for a couple of years (this was about 7 years ago). My current horizon estimate for a “put on a helmet and have a usable brain-computer interface” is never.
With implants, we are probably decades away.
What currently works best is monitoring the motor cortex with implants, as those signals are relatively simple to decode (and from what I recall we’re starting to be able to get pretty fine control). Anything tied to higher-level thought is far away.
As for thought itself, I wonder how we would go about it (assuming we manage to decode it). It’s akin to making a voice-controlled interface, except you have to say aloud everything you are thinking.
Have you kept up with recent ML papers like MindEye, which have managed to reconstruct seen images using image generator models conditioned on fMRI signals?
Ever since that paper came out, I (someone who works in ML but has no neuroimaging expertise) have been really excited for the future of noninvasive BCI.
Would also be curious to know if you have any thoughts on the several start-ups working in parallel on optically pumped magnetometers for portable MEG helmets.
> Have you kept up with recent ML papers like MindEye, which have managed to reconstruct seen images using image generator models conditioned on fMRI signals?
Not really. I left the field mostly because I felt bitter. I find that most papers in the field are more engineering than research. I skimmed through the MindEye paper and don’t find it very interesting. It’s more of a mapping from “people looking at images in an fMRI” to identifying the shown image. They make the leap of saying that this is usable to detect the actual mind’s eye (they cite a paper that requires 40 hours of per-subject training, on the specific dataset), which I quite doubt. Also, we’re nowhere near having a portable fMRI.
As for portable MEG, assuming they can do it: it would indeed be interesting. Since it still relies on synchronized regions, I don’t think high-level thought detection is possible, but it could be better for detecting motor activity and some mental states.
I know this is the classic eye-roll question, but is support planned for Linux/desktop devices? I imagine the future Android app could be used via Waydroid, but seeing how VLC bridged that gap, perhaps Blender could too?
The Wacom kernel drivers are so nice, especially with the neat little interface GNOME has in the settings. I got a secondhand Wacom tablet from 2002 at a garage sale that serves its duty signing PDFs and sculpting in Blender on those rare occasions where it's needed.
Makes me wonder if anyone's playing osu! on their Steam Decks...
I have a small touchscreen Linux device I use to view HN via 4G; it's a UMPC laptop from Donki called the Nanote Next. Using the giant Blender interface on that tiny device would be greatly improved if I could use an Android-style experience.
> There is room for more than one modeling app on iOS as long as you can offer something that Blender doesn’t, even if it’s just better performance.
Eh ... Blender is open source.
It’s just nice to be able to draw on the go or not be tied to a Cintiq.
You can connect the iPad to a larger screen and use it like a Cintiq though.
It’s definitely not fully featured enough for more complex art but the appeal is more than just the accessibility.
Almost all my professional concept artist friends have switched over, but I agree it’s not a great fit for matte paintings.
Almost all VR devices require lots of motion, have limited interaction affordances and have poor screens.
So you’re going to be more tired with a worse experience and will be working slower.
What XR is good for in the standard creative workflow is review at scale.
Blender does have an OpenXR plugin already but usability is hit or miss.
https://news.ycombinator.com/item?id=44622374 (2025-07-20; 62 comments)
Probably a useless submission but the discussion linked to the real thing at https://github.com/ahujasid/blender-mcp
(there even is https://www.printables.com/model/908684-spacemouse-mini-slim... - which I know works in freecad)
It makes navigating 3D spaces so much easier with keyboard and mouse.
In theory, if Blender exposed its UI to the Apple accessibility system, it would let you use things via BCI.
Are you looking to use Blender on a small touch screen backed by desktop Linux?
https://www.reddit.com/r/wacom/comments/16215v6/wacom_one_ge...
Looking into using the new gen 2 w/ touch on an rPi 5.