Let’s pause for a bit and dwell on the absurd amount of RAM it takes to run it even after this exercise. Anyone here remember when QNX shipped a demo in 2000 with a kernel, GUI, web browser and an email client on a single 3.5” floppy? The memory footprint was also a few megabytes. I’m not saying we should be staying within some miserly arbitrary constraints, but my goodness something that draws UI and manages processes has not grown in complexity by four orders of magnitude in 20 years.
Hasn't it, though? HDR, fluid animations, monstrous resolutions, 3D everything, accessibility, fancy APIs that make development easier and allow for more features, support for huge ranges of devices, backwards compatibility. Browsers are almost unrecognizable in feature set, to the point they resemble an OS unto themselves. Email clients have at least stayed mostly the same, except for the part where they also ship a browser, and few of us even use 'em anymore!
Some of those features combine exponentially in complexity and hardware requirements, and some optimizations will trade memory for speed.
Not going to defend particular implementations, but requirements? Those have definitely grown more than we give them credit.
That's the desktop compositor. Windows 7 already had one and ran on 1 GB of RAM.
> accessibility
Not everyone needs it, so it should be an optional installable component for those who do.
> fancy APIs for easier development allowing for more features
Those still use Win32 under the hood. Again, .NET has existed for a very long time; MFC has existed for even longer.
> support for large amounts of devices
No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
> backwards compatibility
That's what Microsoft does historically, nothing new here.
> browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
I think you have to reason this one out. Your statement, to me, doesn’t hold water.
Let’s start with HDR. That requires the content being rendered to have higher bit depth. Not all of this is stored in GPU memory at once; a lot is stored in system RAM and shuffled in and out.
Now take fluid animations. The interpolation of positions isn’t done solely on the GPU; it’s coordinated by the CPU. I don’t think this one necessarily adds RAM usage, but I think your comment is incorrect.
And lastly, resolutions: the GPU is only responsible for the processing and output. You still need high-resolution data going in. This is easily observed by viewing any low-resolution image: it will be heavily blurred or pixelated on a high-resolution screen. It stands to reason that the OS needs high-enough-resolution assets to accommodate high-resolution screens. These aren’t all necessarily stored on disk as high-resolution graphics, but they have to be stored in memory as such.
——
As to the rest of your points, they basically boil down to: I don’t want it, so I don’t see why a default install should have it. Other people do want a highly featureful browser that can keep up with the modern web. And given that webviews are a huge part of application rendering today, the browser actively contributes to memory usage.
>> Let’s start with HDR. That requires the content that’s being rendered to have higher bit depth. Not all of this is stored in GPU memory at once, a lot is stored in system RAM and shuffled in and out.
HDR can still fit in 32-bit pixels. At 4K × 2K we have 8 megapixels, or a 32 MB frame buffer. With triple buffering that's still under 100 MB. Video games have been doing all sorts of animation for decades; it's not a lot of code, and a modern CPU can actually composite a desktop in software pretty well. We use the GPU for speed, but that doesn't have to mean more memory.
The difference between 2000 and 2023 is the quantity of data to move, and like I said, that's about 100 MB.
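The arithmetic in the comment above can be sketched directly. The resolution and pixel format are the ones the comment assumes (32-bit pixels, a 4K × 2K screen):

```python
# Back-of-envelope framebuffer math, assuming 32-bit (4-byte) pixels
# are enough for HDR (e.g. an RGB10A2 format) and a 4096x2048 target.
BYTES_PER_PIXEL = 4
width, height = 4096, 2048

pixels = width * height                               # ~8.4 megapixels
one_buffer_mb = pixels * BYTES_PER_PIXEL / 2**20      # bytes -> MiB
triple_buffered_mb = 3 * one_buffer_mb

print(f"one buffer:      {one_buffer_mb:.0f} MB")     # 32 MB
print(f"triple buffered: {triple_buffered_mb:.0f} MB")  # 96 MB
```

Even tripled for buffering, the raw scan-out memory stays under the 100 MB figure claimed above; the rest of a modern desktop's footprint comes from elsewhere.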
Unintuitively, your two questions are somewhat at odds with each other.
The more work you do on the GPU, the more you need to shuffle because the more GPU memory you’d use AND the more state you’d need to check back on the CPU side, causing sync stalls. It’s not insurmountable, and macOS puts a lot more of its work on the GPU for example. Windows is a little more conservative in that regard.
Here are some more confounding factors:
- Every app needs one or more buffers to draw into. Especially with hidpi screens this can eat up memory quick. The compositor can juggle these to try and get some efficiency, but it can’t move all the state to the GPU due to latency.
- You also need to deal with swap memory. You’d ultimately need to shuffle data back to system RAM and then to disk and back, which is fairly slow. It’s much better theoretically on APUs though.
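The first bullet's point about per-app buffers can be made concrete with some rough numbers. The window sizes, the 2x hidpi scale, and the one-buffer-per-window assumption below are all illustrative, not measured from any real compositor:

```python
# Rough estimate of compositor backing-store memory: each visible
# window keeps at least one full-size buffer it draws into.
BYTES_PER_PIXEL = 4  # assuming 32-bit pixels

def buffer_mb(width, height, scale=2):
    # A "2x" hidpi scale factor quadruples the pixel count.
    return width * scale * height * scale * BYTES_PER_PIXEL / 2**20

# Hypothetical logical window sizes for three open apps.
windows = [(1920, 1080), (1280, 800), (800, 600)]
total = sum(buffer_mb(w, h) for w, h in windows)
print(f"~{total:.0f} MB just for three windows' buffers")
```

Just three windows at a 2x scale already consume tens of megabytes of buffer memory before any application state is counted, which is why hidpi screens "eat up memory quick."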
Theoretically, APUs stand to solve a lot of these issues because they blur the lines of GPU and CPU memory.
Direct storage doesn’t address the majority of these concerns though. It only means the CPU doesn’t need to load data first to shuffle it over, but it doesn’t help if the CPU does need to access said data or schedule it.
It’s largely applicable mainly to games where resource access is known ahead of time.
Only if you’re dealing with just the desktop environment and don’t allow the user to load applications. Or if those apps also didn’t allow dynamism of any kind, like loading images from a website.
> > browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
> No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
People did ask for this, because it made them a lot of money.
You should recognize your opinion is a minority one outside of tech (and possibly, there too).
To wit, virtually no one is jumping to Gopher or Gemini.
What people want is a way to run amazon.com (and gmail and slack and so on), on any of their devices, securely, and without the fuss of installing anything.
Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should show content almost instantly.
Satisfying that user need doesn't require a web browser. If OS vendors provided a way to do that today, we'd be using it. But they don't.
OS vendors still don't understand that. They assume people will forever want to install software via a package manager. They assume software developers care about their platform's special features enough to bother learning Kotlin / Swift / GTK / C# / whatever. And they assume all software a user runs should be trusted with all of that user's local files.
Why is docker popular? Because it lets you type the name of some software. The software is downloaded from the internet. The software runs on linux/mac/windows. And it runs in a sandbox. Just like the web.
The web - for all its flaws - is still the only platform which delivers that experience to end users.
I'd throw out javascript and the DOM and all that rubbish in a heartbeat if we had any better option.
> What people want is a way to run amazon.com (and gmail and slack and so on)
Guess what, both GMail and Slack have video calls. They use WebRTC. The browser has to support it. So the WebRTC code is a part of it.
> Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should to show content almost instantly.
And it does. Open an incognito tab, type amazon.com, it's pretty crazy how fast it loads, with all the images.
You're just proposing to move all the complexity of the browser into some other VM that would have to be shipped by default by all OS platforms before it could become useful.
Java tried exactly this, and it never took off in the desktop OS world. It wasn't significantly slimmer than browsers either, so it wouldn't have addressed any of your concerns.
Also, hyperlinking deep into and out of apps is still something that would be very very hard to achieve if the apps weren't web native - especially given the need to share data along with the links, but in a way that doesn't break security. I would predict that if you tried to recreate a platform with similar capabilities, you would end up reinventing 90% of web tech (though hopefully with a saner GUI model than the awfulness of HTML+CSS+JS).
> You're just proposing to move all the complexity of the browser into some other VM that would have to be shipped by default by all OS platforms before it could become useful.
I'm not proposing that. I didn't propose any solution to this in my comment. For what it's worth, I agree with you - another Java Swing style approach would be a terrible idea. And I have an irrational hate for docker.
If I were in solution mode, what I think we need is all the browser features to be added to desktop operating systems. And those features being:
- Cross platform apps of some kind
- The app should be able to run "directly" from the internet in a lightweight way like web pages do. I shouldn't need to install apps to run them.
- Fierce browser tab style sandboxing.
If the goal was to compete with the browser, apps would need to use mostly platform-native controls like browsers do. WASM would be my tool of choice at this point, since then people can make apps in any language.
Unfortunately, executing this well would probably cost 7-10 figures. And it'd probably need buy in from Apple, Google, Microsoft and maybe GTK and KDE people. (Since we'd want linux, macos, ios, android and windows versions of the UI libraries). Ideally this would all get embedded in the respective operating systems so users don't have to install anything special, otherwise the core appeal would be gone.
Who knows if it'll ever happen, or if we'll just be stuck with the web forever. But a man can dream.
My thinking is that, ultimately, if you want to run the same code on Windows, macOS, and a few popular Linux distros, and to do so on x86 and ARM, you need some kind of VM that translates intermediate code to machine code, and that implements a whole ton of system APIs for each platform. Especially if you want access to a GUI, networking, location, 3D graphics, Bluetooth, sound etc. - all of which have virtually no standardization between these platforms.
You'll then have to convince Microsoft, Apple, Google, IBM/Red Hat, Canonical, the Debian project, and a few others, to actually package this VM with their OSes, so that users don't have to manually choose to install it.
Then, you need to come up with some system of integrating this with, at a minimum, password managers, SAML and OAuth2, or you'll have something far less usable and secure than an equivalent web app. You'll probably have to integrate it with many more web technologies in fact, as people will eventually want to be able to show some web pages or web-formatted emails inside their apps.
So, my prediction is that any such effort will end up reimplementing the browser, with little to no advantage when all is said and done.
Personally, I hate developing any web-like app. The GUI stack in particular is atrocious, with virtually no usable built-in controls, leading to a proliferation of toolkits and frameworks that do half the job and can't talk to each other. I'm hopeful that WASM will eventually allow more mature GUI frameworks to be used in web apps in a cross-platform manner, and we can forget about using a document markup language for designing application UIs. But otherwise, I think the web model is here to stay, and has in fact proven to be the most successful app ecosystem ever tried, by far (especially when counting the numerous iOS and Android apps that are entirely web views).
> You'll then have to convince Microsoft, Apple, Google, IBM RedHat, Canonical, the Debian project, and a few others, to actually package this VM with their OSs, so that users don't have to manually choose to install it.
I think this is the easy part. Everyone is already on board with webassembly. The hard part would be coming up with a common api which paves over all the platform idiosyncrasies in a way that feels good and native everywhere, and that developers actually want to use.
> what I think we need is all the browser features to be added to desktop operating systems.
I trust you are aware Microsoft did exactly that, and the entire tech world exploded in anger, and the US Government took Microsoft to court to make them undo it on the grounds that integrating browser technology into the OS was a monopolistic activity[0].
While I agree with you, I don’t think people really wanted this. I mean, life wasn’t miserable before web apps existed.
We could have lived in an alternate universe where we succeeded in teaching people the basics of how to use the computer as a powerful tool for themselves.
Instead, corporations rushed to make most things super easy, making billions along the way.
I’d even say that this wasn’t really a problem until they realized that closed computers allowed them more control and more money.
So yeah, now we are stuck with web apps on closed systems and most people are happy with it, that’s true.
And, as time passes, we are losing universal access to "the computer". Instead of a great tool for giving power to the people, it’s being transformed into a prison to control what people can do, see, and even think.
PS: When I say "computer" I include PCs, phones, tablets, voice assistants … everything with a processor running arbitrary programs.
I disagree.
When I want to deliver a piece of software to my parents, I first think about a web solution (to me, they stand in for >80% of PC users).
I just uninstalled a browser toolbar from my stepfather's PC last weekend.
There are simply too many bad actors out there.
The browser sandbox works pretty well against them.
My parents have become very hesitant to install anything, even iOS updates, because they don't like change and fear that they might do something wrong.
I agree that JS is not a gold standard. Still, it works most of the time, and with TypeScript stapled on top it is acceptable.
Time has proven again and again (not only in tech) that the simple solutions will prevail.
Want to change it? Build a simpler and better solution.
I don't like that either, but that's human nature at work.
I'm so sick of people shutting down valid opinions because they have a "minority opinion" about tech. That tech slobbers so messily over the majority -- and, seemingly, ONLY the majority -- is a massive disservice to all of the nerds and power users that put these people where they are today.
Maybe, instead of shutting those opinions down, you should reflect on how you, in whatever capacity you serve our awful tech overlords, can work to make these voices more heard and included in software/feature design.
I hear you, but OP said 'no one asked for this', and people did ask for this. The whole argument was about the popularity of the idea of adding features to browsers.
I'd also like to add that accessibility is not a binary that's either on or off. The parent comment might be thinking of features for people with high disability ratings, but eventually everyone has some level of disability. Some even start life with one: color blindness, vision impairment. Most people have progressive near vision loss (presbyopia) as they age.
Also, disability may not be permanent. I recently underwent major surgery, and for at least a few days afterwards using my cell phone was nearly impossible. I resorted to voice control a few times because I did not have the coordination or cognitive function to type. (Aside: cell phones in general are accessibility dumpster fires, but it took a major life event to demonstrate to me how bad it really is.)
So no, accessibility is not just a toggle switch or installable library. In fact, I hope future UI design incorporates some kind of non-intrusive learning and adaptability, such that when the system detects the user continually making certain kinds of errors, the UI will adapt to help.
Of course. Navigating around the install process without accessibility already enabled is going to be a non-starter for many.
As for why all the bloat? I speculate it's because accessibility features are a second-class citizen at best, and when it comes to optimizing and streamlining, all the effort in development goes into the most-used features, whether or not they are the most essential.
I'm suggesting that modern accessibility support doesn't need more memory than the entirety of Windows 95. So 4MB extra, or let's say 10x that to be generous.
Yes. At least in Windows 10 it's a disaster. Without high contrast, which looks terrible, it draws gray text on a light background, making it difficult to read.
Accessibility is much more than just labels for a screen reader. Please stop trivializing anything you don’t use directly; it’s a common thread in all your comments, and it’s a disservice to both the points you’re trying to make and the people who actually use those things.
Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
If you stopped to just think about the vast number of disabilities out there, you’d realize how untrue your statement is.
All that extra crap doesn't make any sense when versions of Windows up through ~7 had controls to let you adjust the UI to exactly how you'd like it, which is of course very important for accessibility.
Then starting with Windows 8, they removed a lot of those features. 11 is even worse.
My point is that accessibility being a thing shouldn't ruin the UI for the people who don't need it. There's no need to visually redesign anything to introduce accessibility. Apps don't need to be made aware whether some control has focus because the user has pressed the tab key, or because it's being focused by a screen reader, or because of some other assistive technology. Colors and font sizes can also be configured and they've been configurable since at least Windows 3.1 — and that is exposed to apps.
Again, I don't see how the things you specified can't be built into existing win32 APIs and why anything needs to be designed from the ground up to support them.
Your point about “apps don’t need to be made aware” is precisely the reason accessibility is part of the system UI framework.
Accessibility is also not something that is just a binary. You may be slightly short-sighted and need larger text, or you might need an OS-specified colour palette that overrides the app's rendering. There’s just so many levels of nuance here. It’s not just “apps can configure a palette”, it’s that they need to work across the system.
If you have the time, I really suggest watching the Apple developer videos on accessibility to see why it’s not as simple as you put it. Microsoft do a lot of great work for accessibility too, they just don’t have much content up to delve into it.
As to why it has to be developed from the ground up, it doesn’t, but it needs to be at the foundation regardless. Apple for example didn’t redo their UI for accessibility, however Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Also, again, I’d point out that you’re purposefully trying to trivialize something you don’t use.
> It’s not just “apps can configure a palette”, it’s that they need to work across the system
There is a system-provided color palette. I don't know where this UI is in modern Windows, but in versions where you could enable the "classic" theme, you could still configure these colors. They are, of course, exposed to apps, and apps are expected to use them to draw their controls. That, as well as theme elements since XP.
> Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Making sure you don't break existing functionality is called regression testing. I'm sure Microsoft already does a lot of it for each release.
And actually it's not quite that. The transition from 9x to NT involved swapping an entire kernel from underneath apps. Most apps didn't notice it. In fact, the backwards compatibility is maintained so well that I can run apps from the 90s — built for, and only tested on, the old DOS-based Windows versions — on my modern ARM Mac, in a VM, through an x86 -> ARM translation layer.
> Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
I wonder where the current status quo lies in regards to both desktop computing and web applications/sites. Which OSes and which GUI frameworks for those are the best or worst, how do they compare? How have they evolved over time? Which web frameworks/libraries give one the best starting point to iterate upon, say, component libraries and how well they integrate with something like React/Angular/Vue?
Sadly I'm not knowledgeable enough at the moment to answer all of those in detail myself, but there are at least some tools for web development.
For example, this seems to have helpful output: https://accessibilitytest.org
There was also this one, albeit a bit more limited: https://www.accessibilitychecker.org
I also found this, but it seemed straight up broken because it couldn't reach my site: https://wave.webaim.org/
From what I can tell, there are many tools like this: https://www.w3.org/WAI/ER/tools/
And yet, while we talk about accessibility occasionally, we don't talk about how good of a starting point certain component frameworks (e.g. Bootstrap vs PrimeFaces/PrimeNG/PrimeVue, Ant Design, ...) provide us with, or how easy it is to setup build toolchains for automated testing and reporting of warnings.
As for OS-related things, I guess seeing how well Qt, GTK, and other solutions support the OS functionality, and what that functionality even is, is probably a whole topic in and of itself.
Accessibility checkers can be helpful, particularly for catching basic errors before they ship. But the large majority of accessibility problems a site can have cannot be identified by software; humans need to find them.
Current Bootstrap is not bad if you read and follow all of their advice. I'm not claiming there are no problems lurking amongst their offerings.
If you search for "name-of-thing accessibility" and don't find extensive details about accessibility in the thing's own documentation, it probably does a poor job. A framework can't prevent developers from making mistakes.
"The large majority of accessibility problems a site can have cannot be identified by software"
Bold statement. I used to work in exactly that area and the reality is humans often simply don't bother finding many of the accessibility issues that automated tools can and do find. Even if such a tool isn't able to accurately pinpoint every possible issue, and inevitably gives a number of false positives (the classic being expecting everything to have ALT text, even when images are essentially decorative and don't provide information to the user), the use of it at least provides a starting point for humans to be able to realistically find the most serious issues and ensure they're addressed.
However I would never claim that good accessibility support requires significantly more (e.g. >2x) resources, and certainly not at the OS level.
In fact, you typically get better accessibility if you use the built-in OS (or browser) provided controls, which are less resource-intensive than the fancy custom ones apps seem to like using these days (even MS's own apps are heavy on custom controls for everything).
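The kind of check automated tools run, as discussed a few comments up, can be sketched with a toy example. This only flags `<img>` tags lacking an `alt` attribute entirely; real tools (axe, WAVE, etc.) check far more, and as noted, no tool can judge whether alt text is actually a *meaningful* equivalent of the image:

```python
# Toy illustration of one automated accessibility check:
# flag <img> tags that have no alt attribute at all.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # srcs of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))

checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="decor.png">')
print("images missing alt:", checker.missing)  # ['decor.png']
```

Note the classic false-positive problem mentioned above: `decor.png` may be purely decorative, in which case the correct fix is an empty `alt=""`, not a description — a judgment only a human can make.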
I currently work in this area (web accessibility) and am just repeating what is commonly understood. When considering what WCAG criteria cover (which is not even everything that could pose a barrier to people with disabilities), most failures to meet the criteria cannot be identified by software alone.
For example, the classic I would say is not whether an image needs an alt attribute or not but whether an image's alt attribute value is a meaningful equivalent to the image in the context where it appears.
I'm not sure what kind of "resources" you're referring to. If you mean computing resources (CPU, RAM, etc.) standard, contemporary computers do seem to have enough for current assistive technologies, one doesn't need to buy a higher end computer to run them. If you mean OS resources for supplying assistive technologies and accessibility APIs, mainstream OS's are decent but specifically for screen readers there's a lot of room for improvement.
> Which OSes and which GUI frameworks for those are the best or worst, how do they compare?
Hands down macOS/iOS are the leaders here with Cocoa/SwiftUI/UIKit etc (ultimately basically the same). The OS also has many hooks to allow third party frameworks to tie in to the accessibility.
Windows is second in my opinion. Microsoft does some good work here but it’s not as extensive in terms of integrations and pervasiveness due to how varied their ecosystem is now. They do however do excellent work on the gaming side with their accessibility controllers.
In terms of UI frameworks, Qt is decent but not great. Electron actually does well here because it can piggyback off the work done for web browsers. Stuff like ImGui ranks at the bottom because it doesn’t expose the UI tree to the OS in a meaningful way.
I can’t speak to web frameworks. In theory it shouldn’t matter as long as the components are good. Many Node frameworks try to install a11y as a package to encourage better accessibility.
I switched from Windows to macOS, which I’ve been using as my daily driver for the last year or so. Using the trackpad (or maybe the Magic Mouse) is basically a requirement to use “vanilla” macOS. Yes, you can install additional programs to help with window management, etc., but in my experience macOS is absolutely horrible when it comes to accessibility from this standpoint. Maybe it’s better for colors, TTS, etc.?
I’m not sure what walls you might have been hitting, but macOS is completely usable with voice direction. I quite recently had to add better accessibility support to an app I worked on, and I was basically navigating the entire system with voice control and keyboard hotkeys.
Voice control in particular is really handy with the number and grid overlays for providing commands.
I’ll check it out. But this seems to approach accessibility as a feature to be turned on or off. Most of what it enables, based on Apple docs, is not just enabled in Windows and many Linux window managers I’ve used, but it’s something that developers actively utilize.
That's not where macOS came from. For Windows and Linux, "in the beginning was the command line" but not for Macs.
There's plenty one can do in macOS and its native applications with a keyboard by default; those that need more can enable "Use keyboard navigation to move focus between controls." Those that need even more enable Full Keyboard Access. These settings aren't on by default because Apple has decided they'd just get in the way and/or confuse people who use the keyboard but rely on it less.
In Safari specifically, by default pressing Tab doesn't focus links as it does in every other browser, because most people use a cursor to activate links, not the keyboard. There also tend to be a lot more links than form inputs, which are what Tab does focus.
Macs try to have just enough accessibility features enabled by default that anyone who needs more can get to the setting to turn it on. Something I just learned Macs have that other OS/hardware combinations don't: audible feedback at power-on, so a blind user can log in when a Mac with full disk encryption is turned on.
I'm not claiming Apple gets everything right or that their approach is the best, I'm just trying to describe the basics of what's there and the outlook driving the choices.
I want touchscreen support on Windows. But guess what? Multitouch worked in Windows 7. If Windows still supported theming basic controls then Microsoft could enable touch screen support in most applications by setting a theme, similar to how they enhance contrast if you enable that feature.
I understand that bigger stuff and better graphics involve more RAM, and the switch to 64-bit doubled pointer sizes (which is why you can't meaningfully run Windows 7 x64 on 1 GB of RAM like you can the 32-bit version), but with 4 GB of system RAM you should be able to fit everything in and then some.
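The pointer-doubling point above is easy to observe on whatever machine you're sitting at. This sketch just reports the native pointer width of the running interpreter; on a 64-bit build it shows 8 bytes, on a 32-bit build 4:

```python
# Observing the 32-bit -> 64-bit pointer size difference directly.
import ctypes
import struct

ptr_bytes = ctypes.sizeof(ctypes.c_void_p)   # native pointer size
print("pointer size:", ptr_bytes, "bytes")
print("interpreter:", struct.calcsize("P") * 8, "-bit")
```

Since pointer-heavy structures (linked lists, object headers, handle tables) are everywhere in an OS, doubling the pointer size inflates memory use well beyond what the raw data would suggest, though nowhere near 2x overall.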
You actually can, as various Linux distributions demonstrate. The algorithms and APIs aren't as well developed, but better window control/accessibility APIs don't take up more than a megabyte of RAM.
People do ask for many Microsoft features, such as the appification of the interface and the Microsoft store. Just because you didn't ask for it doesn't mean it's not necessary. However, Microsoft has known for years how to build and implement those requests in a much more compact environment.
My take is still the same old cynical one: as resources become cheaper, developers become lazier. I don't want to go back to the days of racing the beam with carefully planned instructions, but the moment Electron gained any popularity, the ecosystem went too far. "Yes, but our customers want features more than a small footprint" is the common excuse I hear, but that's ignoring all the people calling various support channels or just being miserable with their terribly slow machines.
> as resources become cheaper, developers become lazier.
At most places I've worked it's a struggle to get time allocated towards necessary refactoring that'll ensure new features can be delivered in a timely fashion.
I'd love to spend time making the product more efficient but unless I can demonstrate immediate and tangible business value in doing so, it's never going to be approved over working on new features.
>No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
I have several devices, including a couple of Linux PCs, an M1 MacBook Air, and a Microsoft Surface Go. If Windows 11 didn't support touchscreens, I would have gone with an iPad. However, Windows 11 is the _best_ touchscreen OS to date.
Unlike iOS or iPadOS, Windows 11 runs desktop apps and combines the convenience of touchscreen scrolling/interaction with the desktop experience. Windows 11 does this very, very well.
I'm curious if you've used Chrome OS recently, there's a lot of good work there too. Touch is there if you need it with the keyboard open, then goes into tablet mode if the laptop is convertible or detachable. The touch/tablet UI has lost many rough edges in the last 2-3 years, and it hasn't affected the mouse/keyboard mode most people use Chromebooks for.
I don't use Windows anymore but I remember thinking "this is exactly what I've always wanted from a convertible/touch-support-in-desktop OS"...
I think I first saw it running on a GeForce with 64MB of RAM. Even then it was smooth as butter.
Now that I think about it, Mac OS X was doing GPU compositing back in 2000/2001, and those machines usually only had about 16MB of VRAM. I remember it running fairly well on a 2005 Mac mini G4 with 32MB of VRAM.
The first versions of Mac OS X only supported software rendering. GPU compositing didn't show up until 2002, in Mac OS X 10.2. It was branded as "Quartz Extreme".
I did not know that! There was about a 6-7 year gap between 1997-2004 where I didn't really do much with Macs. But your timeline seems spot on; it was 10.3 when they introduced Exposé into the system. A great demonstration of the GPU functionality in action.
Actually, IIRC the only requirement for DWM to work was a GPU that supports shaders, because that's what makes the window border translucency/blur effect possible.
Compatible driver, actually. There were at least DWM 1.0 (Vista) and DWM 1.2 (Win7), but Intel never provided a compatible driver for the... 915? series, so you couldn't enable composition on them, despite the hardware being capable enough.
Prodigy had vector-based graphics in a terminal back in the 1980s. Granted, that targeted EGA and 2400 baud modems, but I wonder how well it would work on modern hardware if you just gave it a 4k, 24-bit frame buffer and fixed up the inevitable integer overflows.
Actually, I've run Citrix (ancestor of Remote Desktop) on a 14.4k modem. Once all the bitmaps are downloaded and cached (those app launch 1/2 screen splash pages were murder), it ran pretty well. The meta graphic operations (lines, circles, fills, etc.), fonts, etc. worked fine. Any large pixmap operations were crushing, but most productivity apps didn't use those as much as you'd think.
You didn't ask. It is, as you say, your personal opinion.
From my POV, current Web is fine and the fact that browsers are powerful liberated us from writing specialized desktop apps for various OSes. I am much happier writing a Web UI than hacking together Win32 or Qt-based apps. Or, God forbid, AVKON Symbian OS UI. That was its own circle of hell.
> liberated us from writing specialized desktop apps for various OSes
I use macOS and I very much dislike anything built with cross-platform GUI toolkits, and especially the web stack. And it's always painfully obvious when something is not native. It doesn't behave like the rest of the system. It's not mac-like. It draws its own buttons from scratch and does its own event handling on them instead of using NSButton. I don't want that kind of "liberation". I want proper, native, consistent apps. Most other people probably do too, they just don't realize that or can't put it into words.
The only counter-example out there known to me is IntelliJ-based IDEs. They're built with Swing, but they do somehow feel native enough.
Also, developer experience is not something users care about. And I'm saying that as a developer myself. Do use fancy tools to make your job easier, sure, but avoid the ones that stay inside your product when you ship it.
I don’t like the direction GUIs have gone either, and think the JavaScript-ization of everything has been pretty dumb. But it seems that bloat is doing well in the market.
Users might not care about developer experience, but everything is a trade off: developer time is a cost, the cost of producing software is an input into how much it needs to cost. Users seem to want features delivered quickly, without much regard to implementation quality.
Users just don't have much say in the matter. Case in point: Discord and Slack are atrocious UX-wise. You're still forced to use them because, as with any walled-garden communication service, you aren't the one making this choice.
Hold up. It's been ~14 years since Apple shipped machines with 2GB of memory as their base model.
macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
I mean, 8GB on stock machines today is barely manageable. You can't buy a Mac with less than 8GB today; you can't even buy a phone with 2GB or less. I'm not sure you're in a position to rail against high-memory bloat in computing today.
p.s. I say this as someone who uses macOS as their daily driver and has for a very long time
> I'm not sure you're in a position to rail against high-memory bloat in computing today.
Nobody is a hypocrite for buying X gigabytes of ram but also wanting the naked operating system to use a much smaller amount, or wanting single programs to use a much smaller amount.
> macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
What makes a screen reader gobble memory?
And it definitely shouldn't gobble memory when it's not running.
Mainly the TTS engine being ready for input, stuff like that. Of course, you could go to Linux, where you have to enable assistive technology support before the whole desktop understands that it should work with screen readers. I'm guessing that's where accessibility does take up RAM and resources.
Screen reader support by itself doesn't gobble memory. Android has had it for ages, and still runs on devices with less than 1 GB RAM (Android Wear watches).
Running several instances of Chromium, though... You'll probably run one anyway at all times as your actual web browser, but additional ones in the form of "oh so easy to build" Electron apps don't help. In Apple's eyes, though, you should absolutely ignore other browsers and use Safari exclusively. It might not be as much of a memory hog as Chrome — I haven't researched this; it's just my guess.
I also heard that M1 Macs are better at memory management compared to Intel. Again, I don't have any concrete evidence to back this up, but knowing Apple, it's believable.
It liberated you as a developer. As a developer, I can understand. As a user, I hate you. You never provide me, the user, with a native experience via a web UI. You use custom controls, which break the conventions of native controls here and there. You cannot use the full power of the OS (a YouTube or Spotify player doesn't pause itself when the workstation is locked; my native player of choice does). You eat my resources. You cannot make your application consistent with applications from other vendors, so I need to remember different patterns for different apps. Your typical browser app doesn't have ANY features for power users, like shortcuts for all commands and useful keyboard controls (not to mention full customization of these controls, toolbars, etc.). Damn you and your laziness!
But I understand that most of my complaints are the complaints of a power user with 25+ years of experience and muscle memory, and I'm not the target audience for almost any new app. You win :-(
Everything is a trade-off. If, as a developer, you have to spend ungodly hours on learning multiple UIs, you will have less time left for the actual business logic of your app. Which, from the user's side, means one of the following three:
a) nice looking, but less capable apps,
b) more expensive apps, or, apps that have to be paid even if they could be free in an alternate universe,
c) limited availability - app X only exists for Windows and not Mac, because either a Mac programmer isn't available or would be too expensive.
Developing for multiple UIs at once is both prone to errors and more expensive, you wind up paying for extra developers, extra testers/QA, extra hardware and possibly extra IDEs and various fees. Such extra cost may be negligible for Google, but is absolutely a factor for small software houses outside the richest countries, much more so for "one person shows" and various underfunded OSS projects.
I remember the hell that was Nokia Series 60 and 90 programming. Nokia churned out a deluge of devices that theoretically shared the same OS, but they had so many device-specific quirks and oddities on the UI level that you spent most of the time fighting with (bad) emulators of devices you could not afford to buy. This is the other extreme and I am happy that it seems to be gone forever.
If your application can be useful on different OSes (and there are only 3 desktop OSes in existence now, as porting a desktop application to mobile requires completely different UI and UX no matter what technology you use!), break it into business logic and UI, and find a partner or hire a developer who loves to build native UIs for the other OS. The MVC pattern is old and well known (though not fashionable now, I understand).
OSS projects are a completely different story, of course; no questions for OSS developers.
I'd rather pay $200 for a native application than $100 for an Electron one.
Oh, who am I trying to fool? Of course, it will be an Electron app with a $9.95/month subscription now :-(
"break it into business logic and UI and find partner or hire developer who love to develop native UIs for other OS"
As I said in my previous comment, this is quite expensive, and people inside Silicon Valley rarely understand how cash-strapped the software sector in the rest of the world is. In Czech we have a saying: "a person who is fed won't believe a hungry one". SV veterans who are used to reams of VC cash supporting even loss-making businesses like Uber have no idea that the extra spending needed to hire another developer for several months somewhere in Warsaw or Bucharest may kill a fledgling or small company.
An optional installable component until you have a blind person doing tech support and they have to walk a tech-illiterate person through installing the accessibility stack, lol. Or until you suddenly go blind from a condition or accident and have to mouse your way through the interface, blind, to install that component. Ugh, ableism.
Speaking as someone from back in those days: load up that software that fits in some small amount of memory, and you'll find most of it is crash-filled hot garbage missing the features you need. And the moment you wanted to add new features, you'd start importing libraries, bloating the size of the application.
In general, I would say: far more stable and far more features.
But this of course is in the metrics of how you measure. Windows 3.1 for example was a huge crashing piece of crap that was locking up all the damned time. MacOS at the time wasn't that much better. Now I can leave windows up for a month at a time between security reboots. Specialized Windows and Linux machines in server environments on a reduced patching schedule will stay up far longer, but generally security updates are what limits the uptime.
I remember running Windows applications and getting buffer overflow errors back then. If you got a buffer overflow message today, you'd think either your hardware is going bad or someone wrote a terrible security flaw into your application. And back then there were security flaws everywhere; 'Smashing the Stack for Fun and Profit' wasn't written until '96, well after consumers had started getting on the internet en masse. And if you were using applications like Word or Excel, you could expect to measure crashes per week rather than crashes per month, many of which are completely recoverable in applications like Office today.
I've been on Win11 for 1.5 years or so (Win11 Insider Beta channel) and before that was on Win10 Beta/Dev channels. From what I remember, I was warned multiple times and asked to pick a time, and only after the user (me) showed no cooperation was the system forcibly rebooted, which for a consumer-grade edition (I have the Pro version) is fine from my PoV. I don't want [my] system and systems around me to be part of botnets, like Linux boxes of all sorts.
> For many applications Windows 10 saves state and comes back right where you started on a security update reboot.
This needs application support; by this broad definition, all operating systems "save state and come back right where you started on a security update reboot".
Resolutions and HDR are one area where I think the extra RAM load and increasing application sizes make complete sense. However, my monitors run at 1080p, don't do HDR, and my video files are encoded at a standard colour depth. Despite all this, idle RAM usage has increased over the years.
Accessibility has actually gone down with the switch to web applications. Microsoft had an excellent accessibility framework with subpar but usable tooling built in, and excellent commercial applications to make use of the existing API, all the way back in Windows XP. Backwards compatibility hacks such as loading old memory manager behaviour and allocating extra buffer space for known buggy applications may take more RAM but don't increase any requirements.
I agree that requirements have grown, but not by the amount reflected in standby CPU and memory use. Don't forget that we've also gained near-universal SSD availability, negating the need for RAM caches in many circumstances. And that's ignoring the advances in CPU and GPU performance since the Windows XP days, when DOS was finally killed off and the amount of necessary custom-tailored assembly dropped drastically.
When I boot a Windows XP machine, the only thing I can say I'm really missing as a user is application support. Alright, the Windows XP kernel was incredibly insecure, so let's upgrade to Windows 7 where the painful Vista driver days are behind us and the kernel has been reshaped to put a huge amount of vulnerable code in userspace. What am I missing now? Touchscreen and pen support works, 4k resolutions and higher are supported perfectly fine, almost all modern games still run.
The Steam hardware survey says it all. The largest target audience using their computer components the most runs one or two 1080p monitors, has 6 CPU cores and about 8GB of RAM. Your average consumer doesn't need or use all of that. HiDPI and HDR are a niche and designing your OS around a niche is stupid.
True, but with those access times you can wait a lot longer for content to be loaded into RAM. Hard drives are the reason for many years games needed to duplicate their assets, for example, because seek times slowed down loading time and putting the same content in the file twice but at the right place would speed up the loading process significantly. Games today still have special HDD code because of the difference in performance class.
SSDs won't replace RAM but many RAM caches aren't performance critical; sometimes you need your code to be reasonably fast on a laptop with a 5400 rpm hard drive and then you have very little choice of data structures. With the random access patterns SSDs allow this complication quickly disappears. You won't find many Android apps that will cache 8MB block reads to compensate for a spinning hard drive, for example.
I ran e16 and then e17 as my main desktop back in the day for a good while. I'm sorry but what we had back then was nowhere even near what I'm talking about.
What do we have today that we didn't have back then in terms of bare desktop support?
I mean, we have larger resolution support and scaling for HiDPI, better/faster indexing, better touchpad support. Can you name anything else? Localization hasn't progressed that much; I remember already being able to select some barely-spoken dialects on Linux 20 years ago.
NeXTSTEP 3.1 ran fine at 1152x832 4 shade mono with 20MB of RAM. 32MB if you were running color.
It was also rendering Display PostScript on a 25Mhz '040. One of the first machines in its day that allowed you to drag full windows, rather than frames on the desktop. High tech in action!
You could also do that in '92-ish on RISC OS 3 running on a 1MB Acorn Archimedes with 12MHz ARM2 processor, with high quality font antialiasing. Those were the days!
> Hasn't it, though? HDR, fluid animations, monstrous resolutions, 3D everything, accessibility, fancy APIs for easier development allowing for more features, support for large amounts of devices, backwards compatibility,
So, the features Windows 7 had? I remember running a 3D desktop with a compositor and fancy effects on a 1GB RAM laptop on Linux...
Please don't miss the malware within the OS itself: license services for software such as Microsoft Office and Adobe, and other applications without enough resource bounds.
It is still possible to have a snappy computer experience. Go Linux, use a very configurable distro (Arch, Gentoo, NixOS), choose a lightweight DE and app ecosystem and it will get you there for the most part.
Browsers are still going to be the sticking point, but with aggressive ad blockers/NoScript and hardware that's not terribly old (NVMe storage is priority 1), you should be set.
But of course, snappiness isn't free and you have to spend some time doing first time set-ups and maintenance.
I've got 16 GB of RAM and the browser is using most of it. I can literally see the swap space emptying when I have to (as in "I'm forced to") sacrifice my browsing session (xkill the browser) due to constant swapping out to disk.
And I'm using a PCIe Gen 3 NVMe disk, and have already lowered swappiness.
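For anyone who wants to try the same tweak: swappiness is just a sysctl, so it's a one-liner to check and change. The value 10 below is illustrative, not a recommendation:

```shell
# check the current value (most distros default to 60)
sysctl vm.swappiness

# lower it for the running system (takes effect immediately)
sudo sysctl vm.swappiness=10

# persist the setting across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```

Lower values make the kernel prefer dropping page cache over swapping out anonymous memory; it won't help if you're simply out of RAM.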
At this point, my primary use case for ad blocking isn't the ad blocking itself; it is 1. the security of blocking ads, one of the worst attack vectors in the wild, and 2. the greatly reduced system resources my browser uses. The ad blocking itself is a further bonus.
I'd suggest again to try NoScript/Adblocking, disable hardware accel if you have it enabled, enable it if disabled.
If that still brings no success, I'd suggest you try something like EndeavourOS. Browsers have issues, but that is not normal. You're not using Debian stable on the desktop, right?
> Let’s pause for a bit and dwell on the absurd amount of RAM it takes to run it even after this exercise.
I agree and I find the apologists to be completely wrong. I run a modern system: 38" screen, 2 Gbit/s fiber to the home. I'm not "stuck in the past" with a 17" screen or something.
The thing flies. It's screaming fast as it should be.
But I run a lean Debian Linux system, with a minimal window manager. It's definitely less bloated than Ubuntu and compared to Windows, well: there's no comparison possible.
Every single keystroke has an effect instantly. After reading the article about keyboard latency, I found out my keyboard was one of the lower-latency ones (HHKB), and yet I fine-tuned the Linux kernel's USB 2.0 polling of keyboard inputs to be even faster. ATM I cannot run a real-time kernel because the NVidia driver refuses to support a non-stock kernel (well, that's what it says, at least), but even without that, everything feels, and actually is, insanely fast.
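The comment doesn't say exactly which knob was tuned; one common approach (an assumption on my part, not necessarily what this commenter did) is the usbhid module's polling-interval parameters, set on the kernel command line:

```shell
# usbhid polling intervals are in milliseconds (1 ms = 1000 Hz);
# kbpoll covers keyboards, mousepoll covers mice.
# Add them to the kernel command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet usbhid.kbpoll=1 usbhid.mousepoll=1"
# then regenerate the bootloader config and reboot:
sudo update-grub

# verify after boot:
cat /sys/module/usbhid/parameters/kbpoll
```

Note this only changes how often the host polls the device; the keyboard's own scan/debounce latency is untouched.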
I've got a dozen virtual workspaces / virtual desktops and there are shortcuts assigned to each of them. I can fill every virtual desktop with apps and windows and then switch like a madman on my keyboard between each of them: the system doesn't break a sweat.
I can display all the pictures on my NVME SSD in full screen and leave my finger on the arrow key and they'll move so quickly I can't follow.
Computers have become very fast, and monitor sizes / file sizes for regular usage simply didn't grow anywhere near as quickly as CPU performance.
I love this comment for getting at what, in my opinion, Linux on the desktop is all about: spending your time with a computer that just plain feels great to use.
It doesn't look the same for everyone, of course. It's not about some universalizable value like minimalism. But this is a great example of one of the dimensions in which a Linux desktop can just feel really great in an almost physical way.
The low-end requirements for Debian GNU/Linux (assuming a graphical install and an up-to-date version) are not that low. They're higher than the low-end for Windows XP when it first came out, and probably close to the official requirements for "Vista-capable" machines. So yes, it's a very efficient system by modern standards but it does come with some very real overhead nevertheless.
Vista-capable wasn't that capable. It required 1GB of RAM to run well. Debian with zram and a light DE could run in 512MB of RAM, with SeaMonkey + uBlock Origin and some patience.
Could you explain why any of the things he says make you think a number that high? I'm just finishing building my first PC ever (I've used computers for ... 20 years? But never actually built one). And I have a 1TB NVMe SSD from Western Digital, it was about 60 bucks. I have a 35" BenQ monitor from work, I think it was around $600 at the time of purchase. I don't have fiber at my home, but from what I understand, it's not prohibitively expensive in general. Anyway - I went with 16gb RAM. That felt like a reasonable starting point considering my current and prior daily driver were there as well. My build (minus admittedly expensive monitor) was, to me compared to the Macbooks I usually have for work, a fairly modest $1250 or so. So, roughly the same specs - seems like nothing too crazy?
Likely it's the fiber setting expectations: 2 Gbps is the "premium" tier in many places, where the monthly difference between fast and top speed is about the same as 32 GB of RAM.
Personally, XFCE is pretty lightweight, customizable and stable. I actually did a blog post where I ran Linux Mint (based on Ubuntu) with XFCE, so you can get a rough idea of it in some screenshots: https://blog.kronis.dev/articles/a-week-of-linux-instead-of-...
It's not particularly interesting or pretty, but it works well and does most if not everything that you might need, so is my choice for a daily driver. Here's the debian Wiki page on it: https://wiki.debian.org/Xfce
Apart from that, some folks also like Cinnamon, MATE, GNOME or even KDE. I think the best option is to play around in Live CDs with them and see which feel the best for your individual needs and taste. Do note that Ubuntu as a base distro might give you fewer hassles in regards to proprietary drivers, if you don't care about using only free software much.
> I still can't believe that Windows has turned into such a bloatware/mess that i'm actually at a point i can't live with it anymore...
That is quite unfortunate, especially because there is some software that I think Windows does better - like MobaXTerm or 7-Zip (with its GUI), FancyZones (for window snapping) and most of the GPU control panels.
That said, as that article of mine shows, Linux on the desktop is actually way better than it used to be years ago and gaming is definitely viable, even if not all of the titles are supported. Sadly, I don't think that'll happen anytime soon, but it's still better than nothing!
I'll still probably go the dual boot route with Windows and Linux, or maybe will have a VM with GPU passthrough for specific games on Linux, although I haven't gotten it working just right, ever. Oh well, here's to a brighter future!
Well, other operating systems are still relatively decent at this. My main Linux install eats ~250 MiB of RAM after startup, and I've spent exactly zero amount of time on that, so it can be trimmed down further. That's on a system with 32 GiB of RAM — if you have less RAM, it will eat even less since page tables and various kernel buffers will be smaller.
FreeBSD can be comfortably used on systems with 64 MiB of RAM for solving simple tasks like a small proxy server. It has always been good at this — back in the day cheap VPS often used it (and not Linux) precisely because of its small memory requirements.
Today's version of IceWM takes around 16MB of memory; Xorg will add a bit to that.
There are smaller window managers, but I chose this one as an example because it gives a similar experience to the Windows XP of old.
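If you want to reproduce that kind of number yourself, a rough check looks like this (process names are assumptions; adjust for your setup):

```shell
# resident set size in KB per process
# (RSS overcounts a bit, since shared pages are counted in full)
ps -o rss=,comm= -C icewm,Xorg

# fairer proportional accounting of shared pages, if smem is installed
smem -P "icewm|Xorg"
```

Comparing RSS across systems is always a little apples-to-oranges, but it's good enough to see a 16MB window manager next to a multi-hundred-MB desktop shell.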
I have experimented with slimming down a desktop as much as possible. But once you start a web browser with more than 3 tabs, memory usage goes through the roof. In the end, if you want to run an old system with 512MB of RAM, you are kind of forced to use the web sans JavaScript and images. You are almost better off using links or w3m and TUI apps for everything. NetSurf can work too if you limit the number of tabs open.
On a 1GB system you can definitely use a modern web browser, but you definitely need the ad/tracker-removal extensions and have to take good care not to open more than 2-3 tabs, or you will start swapping a lot.
I've worked on several projects where performance was an afterthought. After the product scaled a bit, it suddenly became the highest priority, but by then it was impossible to fix, at least for the people who created the problem to begin with.
I've taught high performance data structures to dev teams. I've tried to explain how a complex problem can sometimes be solved with a simple algorithm. I've spent decades on attempting to show coworkers that applying a little comp-sci can have a profound effect in the end.
But no. Unfortunately, it always fails. The mindset is always "making it work" and problem solving is brute-forcing the problem until it works.
It takes a special kind of mindset to keep systems efficient. It is like painting a picture, but most seem to prefer doing it with a paint roller.
And I've worked on systems where months were essentially squandered on performance improvements that never paid off, because we never grew the customer base sufficiently for them to be worthwhile...
I'm all for dedicating time and effort towards producing performant code, but it does come at a cost - in some cases, a cost of maintainability (for an extreme example there's always https://users.cs.utah.edu/~elb/folklore/mel.html). In fact I'd suggest in general if you design a library of functions where obviousness/clarity/ease-of-use are your primary criteria, performance is likely to suffer. And there are undoubtedly cases where the cost of higher-grade hardware (in terms of speed and storage capacity) is vastly lower than that of more efficient software. I'd also say performance tuning quite often involves significant trade-offs that lead to much higher memory usage - caching may well be the only way to achieve significant gains at certain scales, but then as you scale up even further, the memory requirements of the caching start to become an issue in themselves. If there were a simple solution it would have been found by now.
Performance is not the same as efficiency, and efficiency can't be solved with more hardware.
Let's say I build a sorting algorithm with O(N^2) complexity that works fine for small inputs (takes <1 millisecond), but it is going to be used for large data systems. Suddenly it takes hundreds of thousands of hours to sort the data.
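A quick, illustrative sketch of that blow-up, counting comparisons instead of wall-clock time so the numbers are deterministic (selection sort standing in for "some O(N^2) algorithm"):

```python
import random

def quadratic_sort(items):
    """Selection sort: O(N^2) comparisons. Returns (sorted_list, comparisons)."""
    out = list(items)
    comparisons = 0
    for i in range(len(out)):
        lo = i
        for j in range(i + 1, len(out)):
            comparisons += 1           # every pair in the remaining tail is inspected
            if out[j] < out[lo]:
                lo = j
        out[i], out[lo] = out[lo], out[i]
    return out, comparisons

for n in (100, 1_000):
    data = [random.random() for _ in range(n)]
    _, comps = quadratic_sort(data)
    print(f"N={n:>5}: {comps:>8,} comparisons")
# 10x more input means ~100x more comparisons; push N into the millions
# and "<1 millisecond" becomes hours, exactly as described above.
```

Selection sort performs exactly N(N-1)/2 comparisons, so 100 items cost 4,950 and 1,000 items cost 499,500: the quadratic growth is visible directly.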
One of the corps I worked with went all-in on scalability in their architecture: one-click deployments, dynamic scaling of servers, rebalancing of databases, automatic provisioning of storage. They were handling 40-50k requests per second with their 15-ish server farm, which could scale down to 5 servers or up to 50-ish before it began to wobble.
I got called in because the company had gotten a large client that needed 100k requests per second. They tried scaling the system to fit the need, but the whole thing got unstable, and their solution was "more operations people to manage it".
I built a custom solution for the backend. It took about two months. The new system could do about 2100k requests per second on one server. Scaling efficiency of the new system was ~90% as well, so lots of capacity for the future.
None of their developers understood computers or the science behind them. They were all educated and experienced developers, but none of that were applied to the problem. They were just assembling parts from the hardware store until something worked, and the resulting Frankenstein's Monster was put into production.
I'm struggling to believe any single server could usefully service 2100k (well over 2 million!) requests per second. Even Google, with their vast farm of servers, reportedly processes fewer than 100k requests per second globally. I've certainly read of servers handling on the order of 1000k requests per second as a benchmark, but the requests are usually pretty trivial (the one I saw literally did no input processing at all, and just returned a single fixed byte! But it was written in Java, surprisingly.)
At any rate, I would think a tiny % of real-life systems actually need to be able to support that sort of load, and bringing in somebody to do the scalability work once it's clear it's needed seems like exactly the right strategy to me.
Not serving, but handling 2100k requests. Your skepticism is rightly placed, as HTTP is yet another example of an inefficient protocol that is nonetheless used as the primary protocol on the internet. Some webservers[1] can serve millions of requests per second, but I'd never use HTTP in code where efficiency is key.
No, I'm talking about handling requests. In this particular case, requests (32 to 64 bytes) were flowing through several services (on the same computer). I replaced the processing chain with a single application to remove the overhead of serialization between processes. Requests were filtered early in the pipeline, which made a ~55% reduction in the work needed.
Requests were then batched into succinct data structures and processed via SIMD. Output used to be JSON, but I instead wrote a custom memory allocator and just memcpy'd the entire blob onto the wire.
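None of the actual code is shown, so here's a toy sketch of just the serialization side of that idea (the record layout and field names are invented): fixed-size records packed back-to-back into one contiguous buffer, instead of one JSON object per record:

```python
import json
import struct

# Invented record layout: little-endian u64 id, u32 kind, f64 value = 20 bytes.
RECORD = struct.Struct("<QId")

def to_json(records):
    """One dict and several strings per record: heavy per-object overhead."""
    return json.dumps(
        [{"id": i, "kind": k, "value": v} for i, k, v in records]
    ).encode()

def to_blob(records):
    """One contiguous buffer; the whole batch can hit the wire in one send()."""
    buf = bytearray(RECORD.size * len(records))
    for n, rec in enumerate(records):
        RECORD.pack_into(buf, n * RECORD.size, *rec)
    return bytes(buf)

records = [(i, i % 4, i * 0.5) for i in range(10_000)]
j, b = to_json(records), to_blob(records)
print(f"JSON: {len(j):,} bytes; blob: {len(b):,} bytes ({len(j)/len(b):.1f}x)")
```

Beyond the size difference, the blob needs no parsing on a cooperating receiver and keeps records contiguous in memory, which is what makes the cache-friendly, SIMD-style processing described above possible.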
Before: No pre-filtering, off-the-shelf databases (PSQL), queue system for I/O, named pipes and TCP/IP for local data transfer. Lots of concurrency issues, thread starvation and I/O bound work.
After: aggressive pre-filtering, succinct data structures for cache coherence, no serialization overhead, SIMD processing. Can saturate a 32-core CPU with almost no overhead.
My go-to on this: I remember running Debian on a Pentium 166 with 32MB of RAM back in '98/'99. It would boot to the desktop using only 6MB. It wasn't flashy, but it could handle the basics. Heck, Windows XP would boot to the desktop using a little under 70MB.
But this isn't just Windows. Currently I am on Kubuntu 22.04, and it uses about 1.5GB to get to the desktop! Yes, it is very smooth and flashy, but that seems like a bit much.
This is why I am interested in projects like Haiku and SerenityOS; they may bring some sanity back into these things.
Obviously there were huge limitations, but it shows what can be done. This fit on one 170K floppy and ran on a 1.44 MHz 8-bit machine with 64K of RAM.
In the 1990s I ran both Linux and Windows on less than 64M of RAM with IDEs, web browsers, games, and more.
If I had to guess what were possible today I’d fall back on the fairly reliable 80/20 rule and posit that 20% of todays bloat is intrinsic to increases in capability and 80% is incidental complexity and waste.
For me, the Commodore also came to mind. It had 64K of RAM and a 64K address range; because other things had to fit in there, not all RAM was usable at the same time. The clock frequency of the PAL model was 985 kHz (yes, KILO), so not even a full MHz.
Yet, I could do
* word processing
* desktop publishing
* working with scanned documents
* spreadsheets
* graphics
* digital painting
* music production
* gaming (even chess)
* programming (besides BASIC and ASM I had a Pascal compiler)
* CAD and 3D design (Giga CAD [1], fascinated me to no end)
* Video creation [2]
For all these tasks there were standalone applications [3] with their own GUI [4]. GEOS was an integrated GUI environment with its own applications, way ahead of its time [5].
It still blows my mind how all this could work.
My first Linux ran on a 386DX with 4MB of RAM, which is probably as low as one can get. Even the installer choked on that little RAM, and one had to create a swap partition and swapon manually after booting but before the installer ran. In text mode it was pretty usable, though; X11 worked, and I remember having GNU Chess running, but it was quite slow.
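For the curious, the manual dance was roughly this (the device name is illustrative; it depends on how the disk was partitioned):

```shell
# from a shell, after booting the install media but before the installer:
mkswap /dev/hda2     # write a swap signature onto the spare partition
swapon /dev/hda2     # enable it so the installer has enough virtual memory
free                 # confirm the extra swap space is visible
```

Modern installers do exactly this behind the scenes; the difference is that a 4MB machine didn't have the headroom to run the installer long enough to get there on its own.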
[3] Some came on extension modules which saved RAM or brought a bit of extra RAM, but we are still talking kilobytes. For examples see https://www.pagetable.com/?p=1730
[4] Or sort of TUI if you like; the strict separation of text and graphics mode wasn't a thing in the home computer era.
[5] The standalone apps were still better. So, as advanced as GEOS was, I believe it was not used productively much.
But if you had to use that software now, you'd say (justly) that it's extremely basic and limited, and that interoperability with other systems is not great.
Fully agreed. When I tried my old Commodore a while ago I couldn't stand the 50Hz screen flicker for long. Unbelievable that back in the day I spent hours on hours in front of that stroboscope.
For me it's more about the excitement that the bright future lay ahead of us so clearly mixed with a slight disappointment that I sometimes feel we could have made more out of it.
Zawinski’s Law - every program on windows attempts to expand until it can be your default PDF viewer. [cloud file sync, advertising display board, telemetry hoover, App Store…]
2GB is a ridiculous amount of memory for something like an OS.
When we see egregious examples like Windows, it's arguable that constraints might be desirable. It is well known that "limitation breeds creativity". It's certainly true outside of "tech" companies. I have witnessed it first hand. "Tech" companies are some sort of weird fantasy world where stupidity disguised as cleverness is allowed to run rampant. No more likely place for this to happen than at companies that have too much money.
Many of them do not need to turn a profit, and a small number have insane profits due to a lack of meaningful competition (cf. honest work). With respect to the latter, it's routine to see (overpaid) employees of these companies brag on HN about how they do very little work.
The standards were also a lot lower back then. Modern-day users expect high resolution and color depth for their screens, seamless hardware support no matter what they plug into the machine, i18n with incredibly complex text rendering rather than a fixed size 80x25 text mode with 256 selectable characters, etc. These things take some overhead. We can improve on existing systems (there's no real reason for web browsers to be as complex as they are, a lot of it is pure bells and whistles) but some of that complexity will be there.
You can achieve a good memory footprint with Linux. Just 2 or 3 years ago I was daily-driving Arch Linux with bspwm as the window manager, and it used only 300 MB, which for me is pretty darn good. But as soon as I opened VS Code with a JS project, my RAM usage was at 12 GB. We have a lot of bloatware everywhere; that's pretty sad.
edit: This reminds me of some rants from Casey Muratori about VS [0] and Windows Terminal [1]
I remember needing to get Windows XP under 64MB of RAM so that I could run some photo-editing software. XP was relatively feature-complete; I don't think Windows currently ships with 32x the features of XP (64MB vs 2048MB minimum).
Linux with a lightweight GUI for example can still run okay with just 128MB. I ran Debian with LXDE on an old IBM T22, and it worked perfectly well. Running Firefox was a problem (but did eventually work), but something more stripped down like NetSurf or Dillo is blazingly fast.
Seamonkey is still around and works nicely on low-spec machines (not sure about 128MB though) as a step up from NetSurf. You get a graphical email client and news/gopher built in. Also a rudimentary web page editor. Printing a web page to PDF is a rough and ready way of getting rich text onto paper. The 'legacy' version of the NoScript plug-in will allow selective use of JavaScript (saves battery and helps with security, which might be an issue).
We don’t need to worry about memory efficiency until we stop getting gains via hardware improvements. For now, developers can just slap a web app into some Chromium-based wrapper, make sure their code doesn’t have anything O(n^2) in it, and they’re good to go.
Tell that to the person on a fixed income who has to invest in an expensive new machine because their 2015 laptop (which still has a whopping 4 GB of memory and a CPU that would have been top-of-the-line twenty years ago) has become unusably slow.
Software efficiency is a serious equity and environmental issue, and I wish more people would see it that way.
This is why I argue that one of the best things the Free/Libre software developer community can start doing is optimizing for lower-spec machines. Microsoft and Apple are either too closely knit with, or directly provide, the hardware to have any interest in prolonging the lifetime of the hardware they sell. Optimizing open OSes can prolong the lifetime of hardware by a significant margin, and it means that lower-income folks are not left in the dark. I don't just mean in well-off countries: if you are in the lower classes of the global south, there is no other option.
There was (is? not sure) a version of Firefox for PowerPC Mac OS X (TenFourFox) that brought modern Firefox features/support to Macs that were long past their prime. The developer mentioned that his favorite story during development was: "One of my favourite reports was from a missionary in Myanmar using a beat-up G4 mini over a dialup modem; I hope he is safe during the present unrest."
This is what can happen when things are optimized for the people, not the business. This is part of why I still use a Core 2 Duo as my daily runner, if it ain't broke don't fix it.
>This is why I argue that one of the best things that the Free/Libre software developer community can start doing is optimizing for lower spec machines.
But isn't the primary application for these machines going to be the web browser, which is pulling in so much JS insanity that the web sites won't render well anyway?
To be fair, if you forced programmers to write efficient code you would just make everything more expensive and flood the market for unskilled labor with university graduates that can't find their own ass.
If it really did come down to that, I would still rather people had to pay more for software and less for hardware, because software has a comparatively minuscule environmental impact.
Actually no. If programmers actually learned how to properly program the machines, we'd not be in the mess we are in right now. Abstraction is the cancer that got us to where we are.
Nobody has any actual clue what they're doing, everyone keeps writing code for the compiler hoping for the best and the rest of the world has to buy new machines because the programmers of the last decades sucked.
That, btw, includes most of you people reading this. You're fucking welcome.
No need to invest in an expensive new machine; a device from 5 years ago, with some more RAM added, would already be pretty adequate. Typing this from a Thinkpad T470, introduced in 2017, which is my main workhorse machine.
A top-of-the-line laptop CPU from 20 years ago likely just doesn't support addressing more than 4GB of RAM. Forcing it to work on modern resource-heavy web pages and media is like forcing a GPU from 20 years ago to run Skyrim. It's just not adequate.
20 years ago is pushing it a bit. But 12 years ago, in 2008, I used a computer with 4GB of RAM in order to:
• Read the news
• Post on social media
• Make video calls
• Use instant messaging
• Create and edit word documents/presentations/spreadsheets
Today I use my computer for all of those same things... and yet they all require drastically more memory (and CPU, GPU, etc). What happened, and how does this benefit consumers? Yeah, modern web pages are resource-heavy—but to what end†?
In some cases, the requirements really did change. For example, I can now watch videos in 4K; my 2008 computer could handle 1080p, but I imagine it wouldn't have handled 4K as well. However, I suspect many users of old machines would be perfectly happy to drop down to a lower resolution.
---
† Something I find amusing in all this... people often say they're glad Flash applets died because they were slow. Nowadays, instead of Flash, we use browser apps written in Javascript. I wonder how "slow" those apps would run if you threw them on a computer from the Flash era. (This isn't to discount other problems with Flash, although I do think it has a worse reputation than it deserves.)
You can use a computer with 4 GB of RAM today for all the things you've mentioned. It might swap here and there and not be as snappy, but generally it'll work.
I think Apple only recently stopped selling 4 GB computers. And their phones from last year ship with 4 GB of RAM while being perfectly able to do all the things you've mentioned as well.
Yeah, I agree - I don't think ram is usually the problem.
I used to have a 2016 dual core macbook pro with integrated graphics and 8gb of RAM or something. The machine was great when I got it, but 18 months ago it was limping along and I finally decided to get rid of it.
And it wasn't any 3rd party apps that killed the machine. Every time the machine started up, iphotoanalysisd or some random spotlight service or something would be eating all my CPU. It was always a 1st party Apple app which was making it slow. And the graphics felt laggy. Just moving windows around felt bad a lot of the time, even when I didn't have anything open. Xcode would sometimes lag the machine so much that it would drop keystrokes while I was typing. I had RAM to spare - it was a CPU problem.
In the process of wiping the machine, I booted into Recovery mode and it booted the 2016 recovery image of macos. Holy smokes - the graphics were all wicked fast again! I spent a couple minutes just moving windows around the screen in recovery mode marvelling at how fast it felt.
I wonder if reverting to an old version of macos would have fixed my problems. As far as I can tell, this was all Apple's fault. They piled up macos with so much crap that their own computers couldn't cope with the weight. I also wonder if they broke the intel graphics drivers in some point release somewhere along the way, or they started relying on GPU features that Intel's driver only had software emulation for.
Modern macOS still has all that crap; the efficiency cores in my M1 laptop are constantly spinning up for some ridiculous Apple service or other. But at least now that still leaves me with 8 P-cores for my actual work. It's ridiculous.
I bet Linux would have worked great on that old laptop. I wish I'd tried it before turfing the machine.
While I do agree with this, it seems worse than that - I've observed with a number of systems that used to run well 5 or so years ago that they simply don't any more, even with exactly the same OS and essentially the same software.
I don't know to what degree that is because of actual hardware deterioration (or at least, file system fragmentation), vs additional gumpf getting automatically installed and slowing things down (but every time I've tried to remove such gumpf, it hasn't really helped), or even because of user perception (but I don't buy that this explains cases of apps that now take over 30 seconds to start up, when they used to take 5 at most). I have one 8+ year old Windows 7 machine in particular that I use for music streaming, and it basically can't be used for at least 30 seconds after logging in - but then it seems mostly fine after that.
"Windows Rot" is definitely a thing but it can be cleared out by doing a clean reinstall of the OS. While this can be time consuming, you'd likely be doing it anyway if you got a new machine.
No idea where I'd even find an installer for Windows 7! It does make me wonder whether upgrading it would actually help. But for now it works well enough I'd rather not risk it (the other thing I use it for is some old software that requires a FAT partition for its licensing to work!).
Why? Are the types of things I want that laptop to do different today than they were 8 years ago? Sure, apps and websites are heavier, but I'd posit the things most people do on their computers haven't changed in a decade at least.
> That has never been a reasonable expectation in the history of computing.
Yes, but again, why? As I see it, everyone has been conditioned to this lie that computers naturally slow down over time, because that's the way it has always been relative to the speed of current software. Originally, that was for a good reason—I'm glad programs now use full-color GUIs. But now?
What would actually happen if Moore's law ended tomorrow, and we were no longer able to make computers faster than they are today? I suspect that a (slim) majority of computer users would actually benefit. Not hardcore gamers, not scientists, and certainly not software developers--some people really do need as much performance as they can get. But for the people who just need to message friends, write documents, check email, etc., the experience would be unchanged—except that their current computers would never slow down!
I absolutely agree. It seems like most software developers only start optimizing code once our software starts feeling slow on our top-of-the-line development machines. As a result, every time we get faster computers we write slower code. When the M1 macs and the new generation of AMD (and now intel) chips came out 18 months or so ago, I spent big. I figured I had about 2 years of everything feeling fast before everyone else upgraded, and all the software I use slowed down again.
Years ago while I was at a startup, I accidentally left my laptop at work on a Friday. I wanted to write some code over the weekend. Well, I had a raspberry pi kicking around, so I fired up nodejs on that and took our project for a spin. But the program took ages to start up. I hadn't noticed the ~200ms startup time on my "real" computer, but on a r.pi that translated to over 1 second of startup time! So annoying! I ended up spending a whole morning profiling and debugging to figure out why it was so slow. Turns out we were pulling in some huge libraries and only using a fraction of the code inside. Trimming that down made the startup time ~5x faster. When I got into the office on monday, I pulled in my changes and felt the speed immediately. But I never would have fixed that if I hadn't spent that weekend developing on the raspberry pi.
Since then I've been wondering if there's a way to do this systematically. Have "slow CPU Tuesdays" or something, where everyone in the office turns off most of their CPU cores out of solidarity with our users. But I'm not holding my breath.
I've never expected my computer to run worse over time. There's no real mechanism for that to even happen; it works fine until it fails completely.
Recently it's become less possible to run the same software for 10+ years because so many things are subscription only and have unnecessary networking, which makes it necessary to patch security flaws, and then you have to accept whatever downgrade the vendor forces on you.
Older applications that you used to be able to just install run just as well as they did the day they came out on the hardware available at the time. The idea that computers "get worse" is entirely a phenomenon of the industry being full of incompetence. Even (or perhaps especially) programmers at FAANG companies are just not very good at their jobs.
Check out the argument Casey Muratori got into with the Microsoft terminal maintainers about how slow the thing was. He got the standard claims about how "oh it's so complex and Unicode is difficult and he's underestimating how hard it is", so he wrote a renderer in a few hours that was orders of magnitude faster, used way less memory, and had better Unicode support.
There is (or at least was) some truth in computers getting worse over time.
File system fragmentation was a very significant problem when most people still used HDDs as their primary mass storage media. SSDs are far less affected by fragmentation because of much faster random access times, but HDDs and thus performance suffered.
The Windows Registry is an arcane secret not even Microsoft fully comprehends at this point, and it can get very messy if a user installs and uninstalls lots of programs frequently. This is, of course, a problem with uninstallers not uninstalling cleanly and not a problem with Windows or the users. With so much crap moving to Chrome online-software-as-a-service outfits, users aren't (un)installing as many programs as frequently anymore, but an unkempt Windows installation can definitely slow down over time.
Software in general also just gets more and more bloated as the moons pass. More bloated software means less efficient use of hardware, meaning less performance and more user grief over time.
I have a netbook from around 2010. It has 2 GB of RAM and a single core Atom processor. It boots to a full Linux GUI desktop in a minute or so. It can handle CAD software, my editor, and my usual toolchain, if a bit slowly. It even handles HD video and the battery still holds a 6 hr charge.
But it doesn't really have enough RAM to run a modern web browser. A few tabs and we are swapping. That's unusably slow. A processor that's 5 or 20x slower is often tolerable; a working set not fitting in RAM means thrashing, with a 1000x slowdown. And so this otherwise perfectly useful computer is garbage. Not enough RAM ends a machine's useful life before anything else does these days, in my experience.
That's fine for those desktop users who don't care about spinning fans, but many users are on laptops and care about battery life. An inefficiently coded app might keep the CPU at high levels even when that's absolutely not required, because it's just a chat app or such.
> For now developers can just slap a web app into some chromium based wrapper […]
making 10% of users unreachable in order to more easily reach the other 90%. yeah, it’s a fine business strategy. though i do wish devs would be more amenable to the 10% of users who end up doing “weird” things with their app as a result. a stupid number of chat companies will release some Electron app that’s literally unusable on old hardware, and then freak out when people write 3rd party clients for it because it’s the only real option you left them.
DRAM density and cost isn't improving like it used to.
Also memory efficiency is about more than just total DRAM usage; bus speeds haven't kept pace with CPU speeds for a long time now. The more of the program we keep close to the CPU -- in cache -- the happier we are.
You are getting a whole runtime and standard library bundled in. The whole point of Python is quick and dirty scripts, because saving you 4 hours is worth more than using 20MB less RAM for something that gets run a couple of times.
early expectations on code interfacing and re-usability failed catastrophically
In my previous job, rather than give people root access to their laptops, we had to do things like run a Docker image that ran 7zip and pipe the I/O to/from it. I'm not kidding, we all did this, and it was only bearable thanks to bash aliases and the fact that we had 16GB of RAM.
This removes WinSxS. That's fine for embedded, since you'd just package the DLLs you need with any executables you want to run, but trying to run this as a general-purpose OS is a fool's errand. Calling WinSxS "bloat" when that "bloat" is what allows 30+ years of backwards compatibility (and a lot of stuff will break) is a creative claim by the article's author, for sure.
Nothing wrong with Tiny11 though, if you know what it is good at and use it for that. Namely, "offline" Windows for some appliance-like usage (e.g. factory controls, display screens, et al) when Linux won't do for whatever reason and licensing Windows IoT isn't possible (small business/personal project/etc).
The idea that removing WinSxS saves space is generally misguided anyway. The vast majority of content there is actually the original file that was used to create a hard link at the destination. So obviously removing the file doesn’t really save any appreciable amount of space.
The remaining content unique to WinSXS is either for cryptographic validation, app compat, or the driver stack.
WinSXS looks like a huge folder in explorer, because explorer's size estimates do not tell you about hard links. It's not that big. I need to question somebody who thinks removing it will remove a lot of bloat.
The amount of backups in there is fairly minimal and is limited to a folder called “backups”. As for disabled features, the size of disabled features there is also relatively small, not gigabytes.
Any space reclaimed using dism’s startcomponentcleanup is only from removal of superseded updates which normally happens automatically whenever the maintenance task runs after a certain period of time.
Note that I explicitly consider backups separate from superseded updates. Superseded updates are kept for a period of time to allow the user to uninstall a newer update.
Get the size of every “file” in the volume along with the file id of each and then subtract the size of any files with a matching file id that are in the WinSxS. Then sum the size of the remaining files that were from the WinSxS.
You could also probably execute “fsutil hardlink list” for every “file” in WinSxS and then ignore any that list more than one result and sum the size of the remainder.
There are of course more efficient ways to do this but those are some quick hacks.
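The first hack above can be sketched cross-platform. This is a hypothetical Python sketch (the function name `unique_size` is mine): treat the (device, inode) pair as the "file id" and count only the bytes of files in a folder whose hard links don't also appear elsewhere on the volume, which is exactly why naive size estimates over-count WinSxS.

```python
import os

def unique_size(folder, rest_of_volume):
    """Approximate the bytes uniquely consumed by `folder`.

    A file whose (device, inode) id also appears under `rest_of_volume`
    is just another hard link to shared data, so it is not counted.
    """
    # Collect the file ids of everything outside the folder of interest.
    ids_elsewhere = set()
    for root, _, files in os.walk(rest_of_volume):
        for name in files:
            st = os.stat(os.path.join(root, name))
            ids_elsewhere.add((st.st_dev, st.st_ino))

    # Sum only files that are not hard-linked from outside.
    total = 0
    for root, _, files in os.walk(folder):
        for name in files:
            st = os.stat(os.path.join(root, name))
            if (st.st_dev, st.st_ino) not in ids_elsewhere:
                total += st.st_size
    return total
```

On NTFS the same idea works with NTFS file IDs (which is what `fsutil hardlink list` enumerates); Explorer's folder totals look enormous precisely because they skip this de-duplication step.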
it does more than that: it keeps a copy of every DLL you've ever installed, not just the ones in use. there's a reason it just gets bigger forever even if you don't install more programs
how much backwards compatibility do i need nowadays?
my laptop only needs to run a few things:
browser
vscode
steam
the microsoft drawing app
some office stuff
sublime
discord
which all update pretty regularly.
the age of the desktop app has been replaced by the age of the browser and electron based apps. i can imagine businesses who built their own setups back in the age of the desktop app being stuck with it, but for the most part i don't think i use windows' backwards compatibility anymore
They may update regularly but that doesn't mean they only use modern features. E.g. even just Steam itself (not just games in it) is largely still 32 bit on Windows requiring gigabytes of 32 bit compatibility files using interfaces going back decades even though Windows 11 itself doesn't have a 32 bit version anymore.
Because integration into the desktop is better as an Electron app. Eg sound and video calls, keyboard shortcuts, not having to worry about finding your Discord/Slack/whatever tab
Electron apps can use desktop capabilities. Web apps are at the mercy of the few desktop-bridging APIs that browsers inconsistently expose. They’re not talking about UI/UX “integration”.
Discord for instance has this “currently playing game X” feature. I have zero interest in broadcasting what I’m doing at the moment to the world, but many do and have this feature enabled. Good luck implementing that in a browser-confined web app.
I think it's because the apps have additional functionality and because the services push users to use the apps on their websites. Some of the additional functionality is artificially limited to apps as companies can put more tracking, advertising, and can ensure that people won't leave their service easily by just closing a tab.
Because browsers haven't built enough compatibility with the desktop to use them like a regular app, therefore severely limiting (sometimes intentionally) what you can and can't access on the file system. It is expected sometime in the near future that browsers will have enough sandbox protection that they will then enable app developers to do the same things that only Electron allows, but without the excessive bloat you get from Electron.
Isn't it interesting that both you and I frequently use Sublime and VS Code? Why can't VS Code kill off Sublime? It's interesting to me that a text editor like Sublime can't be a preferable IDE, but an IDE also isn't a preferable text editor.
I have the same feelings. Sublime has really good, “let me visually manipulate text through cursors” functionality. The find all, regex highlighting, and multi cursors are really nice. These features are in nearly every editor but they always feel crappy to use compared to sublime.
I’m starting to get back into emacs recently though because I like fiddling with tools more than productivity.
I've migrated from Sublime to VS Code. VS Code's integrated debugger, LSP support, etc. are way ahead of Sublime's. That said, typing in Sublime just feels good compared to VS Code. I keep Sublime around for quick edits.
"De-bloated" implies that the stuff removed is "bloat", i.e. worthless. I wouldn't assume that a "de-bloated" install was any less suitable for general purpose computing tasks.
I'd consider that a poor assumption. If you try to use this to install a wide range of applications you run into one of two issues, you rebloat the system, or some things fail to run.
I have a media pc running windows 10 with 2gb ram. It's run great with media player classic, Netflix, and even steam installed. I certainly would not assume "debloated" means "completely crippled"
I think that's the point - some people assume that 2GB is meaningless, whereas others see it as a HUUUGE amount of memory. Never mind history; consider what a modern phone can do with 2GB of RAM.
Calling WinSxS "bloat" when that "bloat" is allowing 30+ years of backwards compatibility (and a lot of stuff will break) is creative by the article's author for sure.
Taking up a lot of space on your drives for data to maintain backwards compatibility makes sense. Why, when not being actively used, does it need to occupy gigabytes of RAM?
Given that the premise of this discussion is how Tiny11 credits removing WinSXS as part of the reason they were able to free up the memory, it would appear the article (and OP) disagree.
The article talks about memory savings and storage savings. Removing WinSXS is a storage savings play, not a memory savings play.
Here's the relevant quote:
> Moreover, removing the Windows Component Store (WinSxS), which is responsible for a fair degree of Tiny11’s compactness, means that installing new features or languages isn’t possible.
It really staggers me the lengths some people will go to try and preserve something that is actively against them, when there are alternatives right there.
I'm not saying Linux is for everyone, but the kind of people creating and running these scripts really should have no issue daily driving Ubuntu or even Arch. Or if they desperately need photoshop or whatever, get a mac.
It's like watching people constantly go back to an abusive relationship.
> It really staggers me the lengths some people will go to try and preserve something that is actively against them, when there are alternatives right there.
The same can be said for those working on jailbreaks and the M1 Linux project, as well as all of the cracking/hacking scene. For some people, it's far more interesting and enjoyable to fight --- and possibly win --- than just "abandoning ship".
Well, maybe I'm a minority for having a EE/physical science hobby, but also belong to the kind of people you are referring to.
I'm pretty stuck to Windows as I need it to drive my home lab. I need to run Windows to
1. Get data from an old optical spectrometer. It was designed for optical endpointing of plasma etching, and one will have a hard time finding anything that is not running Windows in a fab (except lithography).
2. Run a 28-year-old piece of software to acquire timestamps from an HP 53310A modulation domain analyzer
3. Grab frames from an old xray detector
4. Work with two NI DAQ cards. Yes, they are supposed to work on Linux, but I always get weird errors on my Ubuntu work computer while they never failed me on my Windows laptop.
5. Use Autodesk Inventor to prepare files for the 3D printer/machine shop. Siemens NX used to work on Linux, but apart from that, there is not a single piece of non-toy 3D CAD software I'm aware of that supports Mac or Linux.
6. LTSpice simulations and Altium Designer layouting.
Windows is the only first class citizen in many areas, software development and artistic work are two exceptions.
And so far, it seems I can still always be one step ahead of MS in the anti-consumer war, so I'm not too worried.
> Or if they desperately need photoshop or whatever, get a mac.
I'm kind of in that situation, and I don't think going with Mac and the Apple ecosystem really is better than trying to use Windows 10 as long as possible on an older Thinkpad.
Didn't really get the hang of it to phrase it that way.
Everybody who's using tools like Photoshop professionally has been "shaped" to feel well in the Adobe ecosystem. I doubt that's good but that's how it is.
Photoshop, Illustrator, InDesign, they all feel and work similarly, which helps with transitioning/switching between these tools without big issues.
Now take Gimp, Inkscape and Scribus against that. Everything looks different and probably works differently, too. I need to get work done, not learn three separate programs. Also, Scribus seems to be dead; the latest dev blog entry is from 2016.
Serif is doing great work with Affinity, but Adobe is still going strong and defines the professional industry. As long as that's the case we're stuck with Windows/Macs for professional work.
Agreed. My initial response to any post beginning something like "On Windows XP..." "On Windows 7..." "On Debian..." would be like: "Well you already have Windows XP/7/Debian/whatever. If you want to use that, use that. Nobody is forcing you to use Windows 11."
For the people who do want to use Windows 11, and who see it for what it is, it's pretty great. For the people who use Windows XP/7 or who stick to some minimalistic un-featured XFCE-running underpowered Linux machine, you do your own thing. No need to force that on everyone else.
debloating does not mean "making it like XP/W7" - it means ripping out the horsecrap and unnecessary components that are a waste of space, and being able to control what goes on your system - sort of like what nix allows us to do; it also means having options to turn things on and off, etc.
for the non tech savvy - windows is still a great choice for those wanting to simply game and not learn something new like linux - these are the same folks that do notice the difference of a bloated OS and see ads, and who ask for help from those who know more; a lot of us do not have the time nor energy to fully support a vast array of friends' systems. these debloated windows builds are great for those folks, and for me, not having to /shrug and have people buy more HDD space for nothing.
was it not linus himself who mentioned that linux as a popular desktop os will not be a thing until manufacturers who provide prebuilt OSes (and support them) ship machines with linux?
but in all honesty i feel that the X vs Wayland situation needs to be a bit more solidified, similarly with alsa/pulse/pipewire lol; but those are different issues
For the enthusiasts who are doing the debloating, it's almost like they are gaming the system as they move from one level to another.
Twenty years ago I had already been installing Windows XP to FAT32 volumes directly to be more compatible with W9x multibooting. I didn't know anybody else doing this (some thought it couldn't be done) but every time I installed XP you can see the names of every driver as it loads during creation of the pre-installation environment. The very last two drivers are FAT32.SYS followed by NTFS.SYS. I figured Windows might have first been made functional on FAT32 but launched with the intention of total migration to NTFS for most people as seen.
In my later experimentation I found that Vista would run from a FAT32 partition but default Windows 7 would not do it very easily, simply because the WinSxS folder (pronounced win-sucks) was oversized in an insidious way.
The W7 WinSxS folder size was bigger than Vista's but it did not approach the maximum size that FAT32 can handle.
Instead it was the un-necessarily stupidly long filenames which overran the long-filename handling ability of FAT32 early when there were enough of them. Like the best engineers would never have even considered doing at the time, much less go into production.
By judiciously deleting the majority of the contents of WinSxS (but not all by any means), W7 can be run from FAT32 as well without any functional shortcomings as far as my office was concerned.
The modern approach to testing this for yourself would be to install the default W7 to a regular NTFS volume, then debloat the WinSxS folder manually, perhaps in safe mode or when booted to an alternative OS so none of the files on the W7 volume are in use at the time.
Reboot to something like the W11 USB setup media, "Troubleshoot" to go to the command prompt (instead of installing W11), then capture (back up) the debloated dormant W7 partition manually using DISM.EXE.
Then later, on a freshly formatted FAT32 drive, apply the captured W7 system, again using DISM.
Create new boot files for the newly applied W7 system using BCDBOOT.EXE.
Boot W7 while it's on FAT32 and prosper.
It doesn't run much faster than on an NTFS volume, but if you can reboot to Windows 9x on a multiboot system, you can search the FAT32 W7 volume blazingly faster than the identical W7 system can search itself on NTFS.
Now of course all of this needs to be done in legacy BIOS mode since UEFI alone is not adequate for such continued full PC performance.
I guess I could have been playing video games instead but reaching this level seemed just as rewarding anyway.
Wonder if W11 would do this.
Edit: For extra credit I already put W11 onto old BIOS PCs without any GPT, with a regular MBR as if it were W10.
Bypassing hardware restrictions into smaller-than-recommended NTFS volumes using DISM.
I've had access to cheap Windows for years, which is why I kept it around as a secondary OS on my desktop to get around the hassle of getting games run on Linux. Games are mostly play/finish/forget for me anyway.
But since a few years back, most games I was interested in ran perfectly fine on Linux. I haven't rebooted into Windows for almost a year now. So I think that instead of upgrading to 11, I will eventually delete it, use the second SSD to hold my games on Linux, and won't look back.
I remember the days when I was building bare metal recovery for some of our Windows systems using WinPE, imagex and Python. There was this feeling of sane people pouring into M$ to modernize the OS a little bit, and cool stuff came out. But in the end, it's still the same inscrutable mess it always was. Nowadays with more and more ads and unnecessary fluff that gets in the way.
I'm not quite there but... my laptop is mostly just for gaming, with some email, chat and web browsing on the side. So I thought I'd allow Windows 10 to upgrade to 11 and see how it is. (It's not getting anywhere near my desktop!)
But... Windows 11 is just... annoying. The UI is worse than 10 in all the ways that matter to me. So I finally put Linux Mint on this laptop, and it's been pretty good. Not flawless, but really good. By default, I install and play games on Steam.
A notable exception is Anno 1800, which has a clunky multiplayer setup anyway, and just doesn't connect under Linux, but works (begrudgingly) under Windows.
Northgard has been awesome, but just tonight I had a bunch of server connection issues - can't 100% blame Linux, though 15 minutes into a multiplayer game, I was dropped while the two Windows players kept playing. But it's not conclusive!
At any rate, I think for many PC gamers, Linux gaming would work, though it's still not 100% "install, join, play" for every game.
Not 11 but the "Windows 10 IoT Enterprise LTSC 2021" is significantly better if you want an (official) full fledged Windows OS without the bloat. I'm using that on the Steam Deck w/ dual booting and it's perfect
LTSC is no longer debloated. The latest version comes with the same crapware, Windows app store, "recommended" Microsoft account, telemetry that can't be turned off etc.
It's just a stable version frozen in time but heralding it as the bloat-free alternative is no longer true.
I have access to it through work and I gave it a spin recently but it's no longer what it used to be.
I installed IoT 2021 (Windows 10 IoT Enterprise LTSC 2021, version 21H2) about 3 weeks ago and there was nothing - no App Store, and less telemetry (there is some, but significantly less than on the "normal" Windows versions; then again, I reinstalled the App Store, so maybe that's why).
How do you activate that? I think KMS will not work for IoT versions.
I use KMS activated non-IoT LTSC 2021 on my obsolete Surface. MS will not sell that edition to "consumers" like me, so I don't feel guilty at all for pirating it.
Windows activation is a simple thing. You need a KMS emulator, and you need to point your Windows to that. If you don't want to set up your own emulator, you can just search the internet for emulators, and point your Windows to one of those. This is what most activators do anyways. But running your own is also easy, if compiling and running a software is easy for you. I personally use vlmcsd.
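Concretely, pointing Windows at a KMS emulator is a couple of slmgr calls (a sketch; kms.example.local is a placeholder for wherever your vlmcsd instance runs):

```shell
rem In an elevated command prompt: set the KMS host, then attempt activation
slmgr /skms kms.example.local:1688
slmgr /ato

rem Verify the licensing status afterwards
slmgr /dlv
```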
I'm fully aware of the KMS stuff and how it works. But there are Windows editions that are not licensed by and cannot be activated with KMS. I don't think there is any public way of pirating them apart from finding a working MAK key. No one seems to have dug into SPPSvc and the internal workings of Product Policy. (If you find something interesting that I missed, please let me know)
Unlike usual Enterprise editions, I don't think IoT Enterprise SKU will work with KMS. The only possible activation option seems to be with a PKEA key.
For a machine that never sees the internet, the IoT version runs in a Deferred Activation state. So it is useful for an intranet machine that never sees the outside.
Just learned about this. I wish I could get some ameliorated.info scripts for that build. I might be willing to try Winders again in that case. There are some older win32 applications that I miss.
The main difference is that IoT has longer lifecycle support, the activation method is different (though you don't even have to activate), and the IoT version is only available in English.
But it doesn't really matter - they are virtually the same.
You can try but it probably depends on the laptop and how messy your OEM's drivers are. (they might require/depend on some component carved out of the IoT editions)
Honestly, I just gave up: after I bought my last computer, I installed Pop!_OS, installed Steam, and have so far been able to play all my games without a single issue, except The Witcher 3, which only required a configuration change and I was golden. I will use Windows only on work machines, but on personal machines it's Linux for me from now on.
So far Anno 1800 has an issue where multiplayer games only connect if I play on Windows (but single player runs flawlessly in Linux Mint.) Every other game I've played has been great. StarCraft II (in Bottles), Conan, Valheim, Northgard (so far.)
PC gaming on Linux is not perfect, but it's really damn good.
I was a little afraid when I opened up Legion TD 2 if the multiplayer would work or not, but sure enough it did, no issues! I was genuinely surprised to be proven wrong on my worries.
I use an Intel Arc A380 and on Linux it was using DXVK from day one, and therefore the performance issues Winders folks had weren’t a problem for me at all. It did get seriously better with kernel 6.2 recently though.
I had a similar experience with GrapheneOS on my pixel 6 pro.
It got multi-day battery life out of the box, which is far in excess of what Google advertises for that hardware.
Once I installed google play services (which have zero end-user benefit, other than enabling compatibility with apps that have bundled google spyware), battery life more than halved, bringing it in line with what Google claims.
I suspect anti-trust and consumer protection lawsuits would start flying around if more people realized that over 50% of their phone battery was there to support malicious bundled code.
Maybe you are OK with paying for every app or making do with open source ones that don't benefit from an ad revenue stream. Plus, you don't need maps, pay or cast support. But many other people like these features, and if they don't, isn't it great that there are working AOSP builds for the Pixel 6 Pro so that they can roll their own ROMs on top of that? No need to hack like for Windows 11.
Why would maps (get lat/long from GPS chip when navigating / searching), cast and pay need to burn battery when the phone is idle?
(Also, third party implementations of maps, such as organic maps and here we go can install and run fine without impacting battery life when they are not running.)
The answer is that the actually-useful features are bundled with mandatory malware that does need to run in the background in order to implement 24/7 surveillance. That bundling clearly violates US antitrust law.
Also, I suspect most people buying >$1000 phones would be willing to pay tens of dollars for lifetime licenses for maps, pay and cast (which is roughly what they would cost as standalone products), especially if they were privacy preserving and doubled the phone's battery life.
I really think Microsoft needs to take a hard look at Windows and realize that it needs the ability to switch, install, or even decide at boot as a purpose built OS.
Take gaming for example, I pretty much only use my PC for gaming (I prefer my Mac for general purpose stuff) and there is a lot there that is really unnecessary. But where this really becomes an issue is on devices like the Steam Deck.
I installed Windows 10 on mine, used a debloat script to remove anything that was not strictly necessary for gaming, downloading games, and related tasks and I was able to get better performance and battery life for the same games than I did under SteamOS.
While I imagine that this would complicate testing of updates to support these separate purposes, it feels like Windows is trying to do too much all at once.
However I also recognize that much of what I removed is also things like telemetry that I doubt they would remove.
What debloat script did you use, and how did you decide which one to use? My experience is that there are a lot of them out there, and it's impossible to tell which ones actually do something that results in an observable difference, and what the potential drawbacks are of the things being changed.
A lot of the time I feel like you end up with having to do a lot of research for a very minor practical effect.
I used this one: https://github.com/Sycnex/Windows10Debloater and yeah, I had to heavily customize it, and then I did need to re-enable something afterwards, which I found on the GitHub.
Basically what I did was start with the default and then uncheck (or check? I don't remember what the UI called for now) anything related to Xbox and the Store, and I didn't have any issues.
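Under the hood, the Xbox/Store removal these scripts perform amounts to something like the following PowerShell (a sketch; the wildcard pattern is my own and may catch more or fewer packages than the script's checkboxes did):

```powershell
# Remove Xbox-related apps for the current user
Get-AppxPackage -Name "*xbox*" | Remove-AppxPackage

# Stop them from being re-provisioned for newly created user accounts
Get-AppxProvisionedPackage -Online |
    Where-Object { $_.DisplayName -like "*xbox*" } |
    Remove-AppxProvisionedPackage -Online
```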
I also did a comparison before and after and it was actually a pretty decent improvement. About a 10fps improvement over SteamOS and a Normal Windows 10.
For me the biggest incentive was being able to play xbox game pass games and not needing to worry about any compatibility issues with Proton which is why I went down that route.
But yeah, your second part is very true. I feel the impact is minimal if you are on a traditional PC. But on something with such limited resources as a Steam Deck, the difference can be going from 40 fps to the mid 50s and a few more minutes of battery life.
But it isn't something I would recommend most people do. More just kinda pointing out that with the effort I think Microsoft could make a lean Windows really just by taking a look at what is actually necessary to be run for specific tasks.
I used that one as well on the machine I use exclusively to log into work through web browser. The change was incredible, battery life shot up from 2.5-3 hours up to 6 plus.
>I really think Microsoft needs to take a hard look at Windows and realize that it needs the ability to switch, install, or even decide at boot as a purpose built OS.
Possible by making your own custom image using dism. Everything debloat scripts are doing can be done before even making an ISO.
>I really think Microsoft needs to take a hard look at Windows and realize
Why though? Microsoft is a business, Windows is a product. It works well as a product, sales is good, deployments are many. Why should they reconsider the current strategy?
There really should be a penalty for gratuitous waste of resources. Energy consumption is neither free nor without adverse impacts. The era of planned obsolescence, of bloatware expanding to gobble up and justify ever more hardware, is over.
A fresh take on the desktop, given the monstrous devices that are now available in terms of CPU, memory etc., may completely redefine the footprint of personal computing.
Try telling that to the users of Slack and other modern "desktop" applications that are simply web browsers with poorly-implemented UI elements attempting to mimic native UI elements (like menus).
I wish your idea took off, but the modern "developer" (even with insane amounts of funding) seems capable of only writing memory-hogging garbage.
> the modern “developer” seems capable of only writing memory-hogging garbage.
They’re perfectly capable of writing good software but actively choose not to. 1Password is the perfect example - lots of money, good engineers, a team that had proven it could write beautifully implemented native applications. Then they switched to Electron so they could avoid double-handling, and now users are faced with a laggy, buggy, janky, resource-intensive application.
It used to be the price of the hardware. Partly it still is: people won't use your software if they can't afford machines that are able to run it. But hardware gets cheaper.
It's now more about power consumption: both as price of electricity and as battery life.
But usually the "waste of resources" buys "less time spent developing" - that is, the users get to use some capability sooner.
No there should not. Laggy bullshit software is already bad enough and developing low latency user experiences is only getting harder.
There are applications that are extremely vulnerable to energy saving crap. Anything real-time is simply going to need to consume more power. Waiting 15.625 milliseconds completely breaks some applications.
You will take timeBeginPeriod(1) and friends from my dead, cold hands.
A regular Windows 11 will also run on 2 GB of RAM. The official requirement is 4 GB because it'll stutter a lot otherwise, but there is no software block against it. This article claims the de-bloated Windows 11 will run "great" on 2 GB so I guess there's that... But I have to wonder according to which definition of "great" with a lack of benchmarks compared to the official requirements...
Microsoft should just have an option for minimum base install and on demand feature download from cloud. Can't be the only option, because some devices need to be fully featured online, but otherwise wouldn't cost them anything, not even ability to nag people to try more of their freemium stuff. While the advantage for Microsoft would be access to low end Chromebook market while maintaining decent user satisfaction. Also, fewer components on each device means faster and less annoying updates / fewer embarrassing high profile hacks through exploits in components that the customer didn't need. Can also come up with a way to periodically purge components unused for extended time.
Except your options in Windows 11 are "Buzzfeed style widget, remove all sensible taskbar configuration, add extra steps to any context menus, insert ads for OneDrive, Office, and Candy Crush at every opportunity!" Who would add them? :)
But seriously, the way Windows Server handles it is just great. Windows 11 could potentially have a more minimal install.
Something like this would've been awesome a decade ago for Windows 8 when I was doing testing on netbooks with poor hardware.
I'm on Windows 10. After running O&O ShutUp10, this OS has been as good as Windows 7. I can do all my software dev, gaming, video editing/graphic design, etc. on this operating system without issue. I don't think it's ever crashed on me. It genuinely makes me curious what kind of issues people are running into. After all these years being a power user of Windows, so far it's been a smooth ride. The last time I had trouble was with Windows XP and Windows ME before that. I skipped Vista and Windows 8 (except for working on netbooks at Intel back in the day).
I tried Windows 11 in October 2021 and it was an awful mess. Tried too hard to appeal to MacOS and Linux users when that ship has long sailed. Not sure what state it's in now, but I've got no plans to upgrade until I either buy a new device or until Windows 10 gets some serious security vulnerability that's not on Windows 11.
My biggest issue is disk bloat, I use many vms on my laptop, windows vms regularly end up 60+gb for a pretty standard deploy (windows+msvs or office). It normally comes down to a bloated winsxs folder, but I feel like attempting to fix that is playing with fire.
Compare that to Linux, 10gb with a full gui, and minimal bloat after the fact and it’s extremely frustrating.
Reminds me of the pain point I have with Windows Gaming PCs!
Both my teenagers have 2x 1TB NVME drives installed to deal with the insane requirements of Steam, Epic and Xbox gamepass games.
We live in Australia with ~25 Mbps download FTTN, so installing and uninstalling is a huge pain and isn't practical.
I have a similar issue with Apple selling Macs with 256GB hard drives!
Even with iCloud photo and docs offload, these Macs are close to useless as you'll constantly bump into storage issues.
Doesn't help that Epic Games has three instances of Chromium. Add that to Steam's and all of the games' instances and you've easily gotten at least 2 GB of duplicate instances of Chromium. Edge and Edge WebView, at least, are hard-linked, provided that they're the same version.
Apple cloud and Onedrive build conflicts of interest resulting in kneecapping local storage.
Steam is another beast, you could consider a steam cache server or similar, or alternatively teaching your kids how to xfer unused games from primary storage to secondary, and drop a 6-10tb drive in each machine.
Software desperately needs a concept of limited scope and finished version + maintenance. Continually adding features and complexity may make work for software developers but I think at a certain point we enter the negative utility territory.
Are the enterprise versions of Windows 11 also filled with bloat?
I got some keys second-hand for Windows 10 Enterprise LTSC a few years ago, installed it on some ten year old hardware at the time, and I was honestly surprised how responsive Windows could be absent (to the best of my knowledge) the telemetry software, Cortana, etc., and how fast it could boot. It's almost like the true blue good Windows experience without all the nonsense is secretly reserved for only business customers and pirates.
This version is bloat compared to superlite 'divinity' (1.5GB iso capable of running smoothly on 1.5GB RAM, not serviceable) and x-lite 'resurgence' (2.5GB iso - fully upgradeable back to standard). Some quirks here and there but less clicks and less hassle than anything official.
Well, Alpine Linux actually works, with backward compat (glibc can be easily installed), and uses a few dozen MiB of RAM.
FWIW, I use Alpine Linux on my pinephone in the form of postmarketOS (an Alpine derivative) with a full-fledged KDE desktop, running Firefox alongside. IOW., you can use it as daily driver just fine, just need to install the respective packages - which naturally makes it use more resources, but even then far from what Windows will use.
Well, to be honest, I've been daily driving alpinelinux for quite a while, first on an x86 desktop and nowadays on my apple silicon macbook.
In my experience alpine is a good fit for anything from an OCI container all the way up to a full-fledged desktop or your server. But that's not all: you can have it running on your rpi or even your smartphone, as architectures like arm are really first-class citizens - something relatively uncommon even with popular distros like arch, which has only a fraction of its packages available for other architectures.
Alpine may come pretty bare-bones by default, but don't let that fool you: it's more than capable of anything a regular distro is if you know what to do with it. Even if you're a casual linux user you can get it set up in no time by using the setup-* commands that it ships with, e.g. setup-desktop, which takes care of setting up a desktop environment without you having to worry about dbus, seatd, compositors or things like that. Also their repositories are filled with almost any package someone would need, and they can always be coupled with complementary package managers like nix and flatpak in cases where apk isn't enough.
I love alpine, and the reasons above are only a fraction of why - especially when you consider things like running on musl, a much leaner and more modern C runtime than glibc, being systemd-free, and having a minimal-dependency, bare-bones, bloat-free philosophy, as it was originally intended for use on constrained embedded devices like routers. It's one of, if not the best distros available in my opinion, alongside nixos and gentoo, which I deeply respect as well. That being said, one has to factor in the drawbacks some of these features (systemd-free, musl) imply when assessing compatibility, but I'm having trouble remembering cases where I've run into deadlocks even on exotic setups like alpine on aarch64 running natively on an M1 macbook with a custom kernel like asahi-linux, or an sdm845 oneplus 6T smartphone with pmOS.
Alpine worked great in my netbook with 2GB of RAM. Advanced browser support, MESA 22 with GL 2.1 for the old iGPU, Libreoffice... everything basic ran faster than any typical distro.
Alpine with XFCE + dhcpcd-ui as a "WiFi seeking menu" would run circles around Windows 11 using 1/10 of the RAM. With Bluetooth support via Blueman and everything.
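For reference, getting from a bare Alpine install to that kind of desktop is only a couple of commands (a sketch; the exact package and service names are assumptions and may differ between releases):

```shell
# One-shot guided setup of a desktop environment (run as root)
setup-desktop xfce

# Or assemble it by hand: desktop, terminal, Bluetooth, browser
apk add xfce4 xfce4-terminal blueman firefox
rc-update add dbus
rc-update add bluetooth
```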
Not really the same. This Windows is more like an experiment, like Bellard's Linux in the browser. If you'd like to use a less annoying Windows as a daily driver, LTSC is the way to go.
The post mentions that the Windows App Store is still available and can be used to get required apps as needed, but I don't think this is really correct. Component activation plays a huge role in "modern" UAP or WinRT-enabled applications and without WinSxS, I'm not sure how much of component activation will work.
Obviously basic component activation is functional otherwise the shell wouldn't function (my biggest problem with WinRT/UAP: its insidious creep into the OS "internals" rather than just powering apps, widgets, add-ons, whatever on top of the base system), but I'm not sure how many apps you might pick at random from the app store will still work.
I feel like a new law needs to be named after this - maybe it already exists and I am just unaware of it - but: "basic computing functions and operating system requirements expand to match the standard level of computing performance available".
It's crazy how things that were considered basic many years ago should run well within the performance we have in modern systems, yet the basic system has requirements much higher than it used to.
Teams is a massive example of this: it's just text chat and video conferencing, stuff that was easily done 10 or more years ago, yet there are plenty of systems available today that run it like crap, let alone imagining running it on a ten year old system.
There's Andy and Bill's law: "what Andy giveth, Bill taketh away". Reflecting on the fact that software will eat whatever resources are available.
In a way, I think this reflects how life works in general. The way I see it, life expands until there's significant hindrance, or resources are exhausted. I don't mean it in a cynical way, like how Agent Smith does in the Matrix, regarding humanity, I just think that this is the nature of life in general.
Rant: Windows, sadly, seems to be moving more and more in the direction of the user being the product, not the customer: you get spied upon, you get ads, you are subjected to changes to the deal without the ability to object. The only exception seems to be the enterprise versions. Darn it, we are paying for the bloody thing!
What's wrong is not to realize that these are businesses, not endeavors to create the perfect operating system. Your incentives are just not aligned with Microsoft's, simple as that.
Curious as well. Installing a Windows version that some guy has messed with is out of the question for me. Maybe he has lots of cred on the scene and I'm overly cautious, I dunno.
It doesn't take much tinfoil to imagine hardware vendors appreciating bloated OSes: these drive the user up market by necessity. If everything stayed trim and fast, they'd have no reason to upgrade every year. In exchange for this favor, plus a little more grease, the OEMs were more willing to collect the MS Tax on every PC and lock out all others.
I would LOVE to see a class action lawsuit against Microsoft for intentionally making computers obsolete over time through bloated updates. It has personally cost me thousands and it’s about damn time.
I think various Linux distributions would comfortably run in 2GB of RAM even without debloating. Until you start a web browser and open a few tabs. Then it doesn't matter how lean your OS is.
I think the two top questions lots of Windows users would like answered now are:
0- Did they also remove telemetry and similar malware?
1- Is it usable for gaming? I mean, didn't they also remove anything important among the cruft? I have memories of shrunk XP "distros" back in the day that were hacked to the point they refused to run a lot of software.
It seems updating is possible only to some extent and done by hand.
FTA: "This OS install “is not serviceable,” notes NTDev. “.NET, drivers and security definition updates can still be installed from Windows Update,” so this isn’t an install which you can set and forget. Moreover, removing the Windows Component Store (WinSxS), which is responsible for a fair degree of Tiny11’s compactness, means that installing new features or languages isn’t possible. If you install and enjoy Tiny11, we guess you will have to look out for ISO updates as major feature revisions of Windows 11 arrive."
> Moreover, removing the Windows Component Store (WinSxS), which is responsible for a fair degree of Tiny11’s compactness, means that installing new features or languages isn’t possible.
I don't know if I can consider this "bloat" removal.
I will never upgrade to windows 11. I'm almost completely free of the burden of Windows 10 anyways, 90% of the things I do I can do on Gentoo linux. It's the 10% that forces me to still dualboot to windows...
My daily driver Debian system idles at 400mb RAM usage. While writing this, it currently uses 1.2GB of RAM, with two browser windows, and about 40 tabs open.
My vastly less powerful Manjaro arm Laptop with the same setup idles at 160MB.
i used to run arch on a laptop with only 2 gb of ram and had zero problems for years, it was useful because it got great battery life
memory was basically a non-issue unless i was trying to compile a large package, i don't recall the precise baseline it had after boot but it was probably around 12-25%
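Idle-memory baselines like these are easy to check yourself; a quick way on any Linux box (assumes procps' free is available):

```shell
# Print the kernel's view of total/used/available memory in MiB
free -m | awk '/^Mem:/ {printf "total=%s MiB used=%s MiB available=%s MiB\n", $2, $3, $7}'
```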
Lubuntu is not really "lightweight" in my experience. A really light Linux install uses maybe 0.2 GB RAM at the desktop, which lets you do some very basic web browsing even on a 1 GB system.
As someone that has built Linux (and various BSDs) "from scratch," your statement contains the biggest contradiction/problem.
I can get a fully functional desktop with GUI apps, generic hardware support (i.e. not locked down to my hardware), support for dynamic modules/drivers/libraries, audio, 3d-accelerated video, and more in a 300 MiB footprint (with only basic iso image compression), runnable with 128 MiB of RAM.
Then comes in the last part of your statement: "do some very basic web browsing." The system above works just fine with a browser featuring < 201x tech, with great CSS, JS, HTML support. But if I need to build and bundle the latest Firefox, Chrome, or whatever without manually stripping out a ton of features (beyond what is available via distro package managers), that footprint triples or quadruples in size and the memory requirements skyrocket.
Sure, and you could strip an average car down to its shell, steering wheel, throttle/brake controls, a seat, and probably get away with a 75 hp engine. Neat!
Yeah, I remember using NT4 on a machine with 20MB of memory on a lab machine and thinking it was an ungodly amount. A few years later used an SGI with 256MB and thought the same. Actually needed it to flipbook a few minutes of movie resolution frames in RAM. cough
Right-click menu on files missing all the actually useful stuff is a huge annoyance for me. I think there's some registry change that somewhat brings it all back, but IIRC it doesn't always give you the full menu anyway.
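The registry change being alluded to is most likely the well-known empty InprocServer32 key (an assumption on my part; it needs an Explorer restart to take effect):

```shell
rem Restore the full Windows 10 style context menu for the current user
reg add "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32" /f /ve

rem Revert later by deleting the key
reg delete "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}" /f
```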
I never thinned an OS since Windows XP (Black Viper, you rule!).
Why put yourself in harm's way by using an unsupported configuration? I briefly considered O&O ShutUp 11 for shutting up Windows' telemetry. From what I read it works reasonably well, but it made Edge not start after some updates.
I use Windows and Linux, for privacy concerns. If you want privacy, go for Mac (not that I'd do it). On mobile, still working out what are the best options.
I just assume this 'release' has no malware or backdoors in it. I love the work of NTDev and even follow them on Twitter, but since this is closed source there's no way of eyeballing what changes have been made. I would run this in an offline sandbox VM for trying out different Windows software, but I wouldn't connect it to the Internet.
The same could be said for official Windows builds, except we know for sure they contain malware and back doors.
For instance, Microsoft engineers have the ability to pull arbitrary files off Windows 11 machines, at least according to Microsoft press releases from a few years ago. Doing so required “managerial approval”, and was “only for debugging software faults”, but anyone vaguely familiar with the US CLOUD act knows that they’re legally required to provide the same access to law enforcement searches.
> For instance, Microsoft engineers have the ability to pull arbitrary files off Windows 11 machines, at least according to Microsoft press releases from a few years ago.
I seriously doubt it. Do you have a source for this?
Have you used other flavors? Tell me about Zorin? I'm on Ubuntu out of mostly momentum, but it's slow as hell. I've used Debian, Arch, and Mandrake in the past. I'm a web-dev so I need all my dev packaging etc to work predictably, a good terminal emulator, and ideally firefox.
I love slim systems so I'd really like to trim some fat.
That's the job of the GPU driver, mostly.
> 3D everything
That's the desktop compositor. Windows 7 already had one and ran on 1 GB of RAM.
> accessibility
Not everyone needs it, so it should be an optional installable component for those who do.
> fancy APIs for easier development allowing for more features
That still use win32 under the hood. Again, .net has existed for a very long time. MFC has existed for an even longer time.
> support for large amounts of devices
No one asked for Windows on touchscreen anything. Microsoft decided that themselves and ruined the UX for the remaining 99% of the users that still use a mouse and a keyboard.
> backwards compatibility
That's what Microsoft does historically, nothing new here.
> browsers are almost unrecognizable in featureset to the point they resemble an OS unto themselves
No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
I think you have to reason this one out. Your statement, to me, doesn’t hold water.
Let’s start with HDR. That requires the content that’s being rendered to have higher bit depth. Not all of this is stored in GPU memory at once, a lot is stored in system RAM and shuffled in and out.
Now take fluid animations. The interpolation of positions isn't done solely on the GPU; it's coordinated by the CPU. I don't think this one necessarily adds RAM usage, but I think your comment is incorrect.
And lastly, with resolutions, the GPU is only responsible for the processing and output. You still need high resolution data going in. This is easily observed by viewing any low resolution image: it will be heavily blurred or pixelated on a high resolution screen. It stands to reason, then, that the OS needs high enough resolution assets to accommodate high resolution screens. These aren't necessarily all stored on disk as high resolution graphics, but they have to be held in memory as such.
——
As to the rest of your points, they basically boil down to: I don't want it, so I don't see why a default install should have it. Other people do want a full-featured browser that can keep up with the modern web. And given that webviews are a huge part of application rendering today, the browser actively contributes to memory usage.
HDR can still fit in 32-bit pixels. At 4K × 2K we have 8 megapixels, or a 32 MB frame buffer. With triple buffering that's still under 100 MB. Video games have been doing all sorts of animation for decades. It's not a lot of code, and a modern CPU can actually composite a desktop in software pretty well. We use the GPU for speed, but that doesn't have to mean more memory.
The difference between 2000 and 2023 is the quantity of data to move and, like I said, that's about 100 MB.
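The arithmetic above is easy to sanity-check. This is just the comment's own numbers worked out, not a claim about what any real compositor actually allocates:

```python
# Back-of-the-envelope framebuffer math from the comment above.
# Assumes HDR packed into 32-bit pixels (e.g. 10 bits per channel + 2 alpha).
width, height = 4096, 2048        # "4K x 2K"
bytes_per_pixel = 4               # one 32-bit packed pixel
buffers = 3                       # triple buffering

one_buffer_mb = width * height * bytes_per_pixel / (1024 ** 2)
total_mb = one_buffer_mb * buffers
print(one_buffer_mb, total_mb)    # 32.0 MB per buffer, 96.0 MB for all three
```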
Could we stop shuffling it out? Do more of the work there, directly?
The more work you do on the GPU, the more you need to shuffle because the more GPU memory you’d use AND the more state you’d need to check back on the CPU side, causing sync stalls. It’s not insurmountable, and macOS puts a lot more of its work on the GPU for example. Windows is a little more conservative in that regard.
Here are some more confounding factors:
- Every app needs one or more buffers to draw into. Especially with HiDPI screens this can eat up memory quickly. The compositor can juggle these to try to get some efficiency, but it can't move all the state to the GPU due to latency.
- You also need to deal with swap memory. You'd ultimately need to shuffle data back to system RAM and then to disk and back, which is fairly slow. It's much better theoretically on APUs, though.
Theoretically, APUs stand to solve a lot of these issues because they blur the lines of GPU and CPU memory.
It's mainly applicable to games, where resource access is known ahead of time.
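To put a rough number on the per-window buffer cost mentioned in the first bullet above (all figures are illustrative; real compositors vary in pixel format and buffer count):

```python
# Hypothetical per-window backing-store cost on a 2x HiDPI display,
# assuming one 32-bit BGRA buffer per visible window.
def window_buffer_mb(logical_w, logical_h, scale=2, bytes_per_pixel=4):
    return logical_w * scale * logical_h * scale * bytes_per_pixel / (1024 ** 2)

per_window = window_buffer_mb(1280, 800)   # a modest 1280x800 logical window
print(per_window, 20 * per_window)         # 15.625 MB each, 312.5 MB for 20 windows
```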
> No one asked for this. My personal opinion is that everything app-like about browsers needs to be undone, yesterday, and they should again become the hypertext document viewers they were meant to be. Even JS is too much, but I guess it does have to stay.
People did ask for this, because it made them a lot of money.
You should recognize your opinion is a minority one outside of tech (and possibly, there too).
To wit, virtually no one is jumping to Gopher or Gemini.
What people want is a way to run amazon.com (and gmail and slack and so on), on any of their devices, securely, and without the fuss of installing anything.
Ideally, the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should show content almost instantly.
Satisfying that user need doesn't require a web browser. If OS vendors provided a way to do that today, we'd be using it. But they don't.
OS vendors still don't understand that. They assume people forever want to install software via a package manager. They assume software developers care about their platform's special features enough to bother learning Kotlin / Swift / GTK / C# / whatever. And they assume all the software a user runs should be trusted with all of their local files.
Why is Docker popular? Because it lets you type the name of some software. The software is downloaded from the internet. The software runs on Linux/Mac/Windows. And it runs in a sandbox. Just like the web.
The web - for all its flaws - is still the only platform which delivers that experience to end users.
I'd throw out javascript and the DOM and all that rubbish in a heartbeat if we had any better option.
Guess what, both GMail and Slack have video calls. They use WebRTC. The browser has to support it. So the WebRTC code is a part of it.
> Ideally the first-time use of amazon.com should involve nothing more than typing "amazon" and hitting enter. It should to show content almost instantly.
And it does. Open an incognito tab, type amazon.com, it's pretty crazy how fast it loads, with all the images.
Yes; that's my point. That's the bar native apps need to reach to be competitive with the web.
Java tried exactly this, and it never took off in the desktop OS world. It wasn't significantly slimmer than browsers either, so it wouldn't have addressed any of your concerns.
Also, hyperlinking deep into and out of apps is still something that would be very very hard to achieve if the apps weren't web native - especially given the need to share data along with the links, but in a way that doesn't break security. I would predict that if you tried to recreate a platform with similar capabilities, you would end up reinventing 90% of web tech (though hopefully with a saner GUI model than the awfulness of HTML+CSS+JS).
I'm not proposing that. I didn't propose any solution to this in my comment. For what it's worth, I agree with you: another Java Swing style approach would be a terrible idea. And I have an irrational hate for Docker.
If I were in solution mode, what I think we need is all the browser features to be added to desktop operating systems. And those features being:
- Cross platform apps of some kind
- The app should be able to run "directly" from the internet in a lightweight way like web pages do. I shouldn't need to install apps to run them.
- Fierce browser tab style sandboxing.
If the goal was to compete with the browser, apps would need to use mostly platform-native controls like browsers do. WASM would be my tool of choice at this point, since then people can make apps in any language.
Unfortunately, executing this well would probably cost 7-10 figures. And it'd probably need buy in from Apple, Google, Microsoft and maybe GTK and KDE people. (Since we'd want linux, macos, ios, android and windows versions of the UI libraries). Ideally this would all get embedded in the respective operating systems so users don't have to install anything special, otherwise the core appeal would be gone.
Who knows if it'll ever happen, or if we'll just be stuck with the web forever. But a man can dream.
You'll then have to convince Microsoft, Apple, Google, IBM RedHat, Canonical, the Debian project, and a few others, to actually package this VM with their OSs, so that users don't have to manually choose to install it.
Then, you need to come up with some system of integrating this with, at a minimum, password managers, SAML and OAuth2, or you'll have something far less usable and secure than an equivalent web app. You'll probably have to integrate it with many more web technologies in fact, as people will eventually want to be able to show some web pages or web-formatted emails inside their apps.
So, my prediction is that any such effort will end-up reimplementing the browser, with little to no advantages when all is said and done.
Personally, I hate developing any web-like app. The GUI stack in particular is atrocious, with virtually no usable built-in controls, leading to a proliferation of toolkits and frameworks that do half the job and can't talk to each other. I'm hopeful that WASM will eventually allow more mature GUI frameworks to be used in web apps in a cross-platform manner, and we can forget about using a document markup language for designing application UIs. But otherwise, I think the web model is here to stay, and has in fact proven to be the most successful app ecosystem ever tried, by far (especially when counting the numerous iOS and Android apps that are entirely web views).
I think this is the easy part. Everyone is already on board with webassembly. The hard part would be coming up with a common api which paves over all the platform idiosyncrasies in a way that feels good and native everywhere, and that developers actually want to use.
I trust you are aware Microsoft did exactly that, and the entire tech world exploded in anger, and the US government took Microsoft to court to make them undo it, on the grounds that integrating browser technology into the OS was a monopolistic activity[0].
[0]https://en.m.wikipedia.org/wiki/United_States_v._Microsoft_C....
We could have lived in an alternative universe where we succeeded in teaching people the basics of how to use the computer as a powerful tool for themselves.
Instead, corporations rushed to make most things super easy, making billions along the way.
I’d even say that this wasn’t really a problem until they realized that closed computers allowed them more control and more money.
So yeah, now we are stuck with web apps on closed systems and most people are happy with it, that’s true.
And, as time passes, we are losing universal access to "the computer". Instead of a great tool for giving power to the people, it's being transformed into a prison to control what people can do, see, and even think.
PS: When I say "computer" I include PCs, phones, tablets, voice assistants… everything with a processor running arbitrary programs.
I agree that JS is not a gold standard. Still it works most of the time and with typescript stapled on top it is acceptable.
Time has proven again and again (not only in tech) that the simple solutions will prevail. Want to change it? Build a simpler and better solution. I don't like that too but that's human nature at work.
Maybe, instead of shutting those opinions down, you should reflect on how you, in whatever capacity you serve our awful tech overlords, can work to make these voices more heard and included in software/feature design.
Fwiw: https://news.ycombinator.com/item?id=34226798
The UI has to be designed from the ground up to support accessibility.
Also, disability may not be permanent. I recently underwent major surgery, and for at least a few days afterwards using my cell phone was nearly impossible. I resorted to voice control a few times because I did not have the coordination or cognitive function to type. (Aside: cell phones in general are accessibility dumpster fires, but it took a major life event to demonstrate to me how bad it really is.)
So no, accessibility is not just a toggle switch or installable library. In fact, I hope future UI design incorporates some kind of non-intrusive learning and adaptability, such that when the system detects the user continually making certain kinds of errors, the UI will adapt to help.
Of course. Navigating around the install process without accessibility already enabled is going to be a non-starter for many.
As for why all the bloat? I speculate it's because accessibility features are a second-class citizen at best, and when it comes to optimizing and streamlining, all the effort in development goes into the most-used features, whether or not they are the most essential.
Accessibility includes interaction design, zoom ability, audio commands, action link ups, alternate rendering modes, alternate motion modes, hooks for assistive devices to interact with the system. It goes far deeper into the system than just labels for a screen reader.
If you stopped to just think about the vast number of disabilities out there, you’d realize how untrue your statement is.
Then starting with Windows 8, they removed a lot of those features. 11 is even worse.
Again, I don't see how the things you specified can't be built into existing win32 APIs and why anything needs to be designed from the ground up to support them.
Accessibility is also not just a binary. You may be slightly short-sighted and need larger text, or you might need an OS-specified colour palette that overrides the app's rendering. There are just so many levels of nuance here. It's not just "apps can configure a palette"; it's that they need to work across the system.
If you have the time, I really suggest watching the Apple developer videos on accessibility to see why it's not as simple as you put it. Microsoft do a lot of great work for accessibility too; they just don't have much content up to delve into it.
As to why it has to be developed from the ground up, it doesn’t, but it needs to be at the foundation regardless. Apple for example didn’t redo their UI for accessibility, however Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Also, again, I'd point out that you're purposefully trying to trivialize something you don't use.
There is a system-provided color palette. I don't know where this UI is in modern Windows, but in versions where you could enable the "classic" theme, you could still configure these colors. They are, of course, exposed to apps, and apps are expected to use them to draw their controls. That, as well as theme elements since XP.
> Microsoft take a more “we won’t touch existing stuff in case we break it” approach to their core libs.
Making sure you don't break existing functionality is called regression testing. I'm sure Microsoft already does a lot of it for each release.
And actually it's not quite that. The transition from 9x to NT involved swapping an entire kernel from underneath apps. Most apps didn't notice it. In fact, the backwards compatibility is maintained so well that I can run apps from the 90s — built for, and only tested on, the old DOS-based Windows versions — on my modern ARM Mac, in a VM, through an x86 -> ARM translation layer.
People with motion sickness (reduced animation), the deaf (captions!), and the colorblind would beg to differ
I wonder where the current status quo lies in regards to both desktop computing and web applications/sites. Which OSes and which GUI frameworks for those are the best or worst, how do they compare? How have they evolved over time? Which web frameworks/libraries give one the best starting point to iterate upon, say, component libraries and how well they integrate with something like React/Angular/Vue?
Sadly I'm not knowledgeable enough at the moment to answer all of those in detail myself, but there are at least some tools for web development.
And yet, while we talk about accessibility occasionally, we don't talk about how good of a starting point certain component frameworks (e.g. Bootstrap vs PrimeFaces/PrimeNG/PrimeVue, Ant Design, ...) provide us with, or how easy it is to set up build toolchains for automated testing and reporting of warnings. As for OS related things, I guess seeing how well Qt, GTK and other solutions support OS functionality, and what that functionality even is, is probably a whole topic in and of itself.
It worked for me, it found lots of color contrast problems (white-on-light purple has low contrast). https://wave.webaim.org/report#/https://kronis.dev/
WAVE is also available as a browser extension.
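The contrast flags WAVE raises come from the WCAG 2.x contrast-ratio formula, which is simple enough to reproduce. The "light purple" RGB value below is a made-up stand-in for illustration, not the site's actual color:

```python
# WCAG 2.x relative-luminance and contrast-ratio calculation.
def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

white = (255, 255, 255)
light_purple = (200, 180, 230)   # hypothetical "light purple" background
# Roughly 1.9:1, far below the 4.5:1 AA threshold for normal text.
print(round(contrast_ratio(white, light_purple), 2))
```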
Accessibility checkers can be helpful, particularly for catching basic errors before they ship. But the large majority of accessibility problems a site can have cannot be identified by software; humans need to find them.
Current Bootstrap is not bad if you read and follow all of their advice. I'm not claiming there are no problems lurking amongst their offerings.
If you search for "name-of-thing accessibility" and don't find extensive details about accessibility in the thing's own documentation, it probably does a poor job. A framework can't prevent developers from making mistakes.
Bold statement. I used to work in exactly that area and the reality is humans often simply don't bother finding many of the accessibility issues that automated tools can and do find. Even if such a tool isn't able to accurately pinpoint every possible issue, and inevitably gives a number of false positives (the classic being expecting everything to have ALT text, even when images are essentially decorative and don't provide information to the user), the use of it at least provides a starting point for humans to be able to realistically find the most serious issues and ensure they're addressed.
However, I would never claim that good accessibility support requires significantly more (e.g. >2x) resources, and certainly not at the OS level. In fact, you typically get better accessibility if you use the built-in OS (or browser) provided controls, which are less resource intensive than the fancy custom ones apps seem to like using these days (even MS's own apps are heavy on custom controls for everything).
For example, the classic I would say is not whether an image needs an alt attribute or not but whether an image's alt attribute value is a meaningful equivalent to the image in the context where it appears.
I'm not sure what kind of "resources" you're referring to. If you mean computing resources (CPU, RAM, etc.) standard, contemporary computers do seem to have enough for current assistive technologies, one doesn't need to buy a higher end computer to run them. If you mean OS resources for supplying assistive technologies and accessibility APIs, mainstream OS's are decent but specifically for screen readers there's a lot of room for improvement.
Hands down macOS/iOS are the leaders here with Cocoa/SwiftUI/UIKit etc (ultimately basically the same). The OS also has many hooks to allow third party frameworks to tie in to the accessibility.
Windows is second in my opinion. Microsoft does some good work here but it’s not as extensive in terms of integrations and pervasiveness due to how varied their ecosystem is now. They do however do excellent work on the gaming side with their accessibility controllers.
In terms of UI frameworks, Qt is decent but not great. Electron actually does well here because it can piggyback on the work done for web browsers. Stuff like ImGui etc. rank at the bottom because they don't expose the widget tree to the OS in a meaningful way.
I can’t speak to web frameworks. In theory it shouldn’t matter as long as the components are good. Many node frameworks try and install a11y as a package to encourage better accessibility.
Voice control in particular is really handy with the number and grid overlays for providing commands.
There's plenty one can do in macOS and its native applications with a keyboard by default; those who need more can enable "Use keyboard navigation to move focus between controls." Those who need even more enable Full Keyboard Access. These settings aren't on by default because Apple has decided they'd just get in the way and/or confuse people who use the keyboard but rely on it less.
In Safari specifically, pressing Tab doesn't focus links by default as it does in every other browser, because most people use a cursor to activate links, not the keyboard. There also tend to be a lot more links than what Tab does focus: form inputs.
Macs try to have just enough accessibility features enabled by default that anyone who needs more can get to the setting to turn it on. Something I just learned Macs have that other OS/hardware doesn't is audible feedback that lets the blind log in when a Mac is turned on while full disk encryption is enabled.
I'm not claiming Apple gets everything right or that their approach is the best, I'm just trying to describe the basics of what's there and the outlook driving the choices.
Gray on gray, Teams. Accessible like a hammer: everything looks like nails.
Dark/light mode is accessibility.
Reduced animations/not animating tiles, etc. is accessibility.
Being able to scale/zoom in on fonts and images is accessibility.
Ensuring your automated GUI tests can interrogate the application/page's structure and state is accessibility.
Not reloading the entire page to render search results (which would lose search filter selection and/or current keyboard focus) is accessibility.
etc.
I understand that bigger stuff and better graphics involve more RAM, and the switch to 64-bit doubled pointer sizes (which is why you can't meaningfully run Windows 7 x64 on 1 GB of RAM like you can the 32-bit version), but with 4 GB of system RAM you should be able to fit everything in and then some.
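The pointer-doubling effect is easy to illustrate with a pointer-heavy structure. The node layout below is hypothetical and ignores alignment padding:

```python
# Why 64-bit inflates pointer-heavy memory: a linked-list node with two
# pointer fields and a 4-byte payload. Pointer fields double in size;
# the payload doesn't.
def node_bytes(ptr_size, n_pointers=2, payload=4):
    return n_pointers * ptr_size + payload

node32 = node_bytes(4)   # 32-bit build
node64 = node_bytes(8)   # 64-bit build
print(node32, node64)    # 12 vs 20 bytes per node
```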
You actually can, as various Linux distributions demonstrate. The algorithms and APIs aren't as well developed, but better window control/accessibility APIs don't take up more than a megabyte of RAM.
People do ask for many Microsoft features, such as the appification of the interface and the Microsoft store. Just because you didn't ask for it, doesn't mean it's not necessary. However, Microsoft has known for years how to build and implement those requests in a much more compact environment.
My take is still the same old cynical one: as resources become cheaper, developers become lazier. I don't want to go back to the days of racing the beam with carefully planned instructions, but the moment Electron gained any popularity the ecosystem went too far. "Yes, but our customers want features more than a small footprint" is the common excuse I hear, but that ignores all the people calling various support channels or just being miserable with their terribly slow machines.
At most places I've worked it's a struggle to get time allocated towards necessary refactoring that'll ensure new features can be delivered in a timely fashion.
I'd love to spend time making the product more efficient but unless I can demonstrate immediate and tangible business value in doing so, it's never going to be approved over working on new features.
I have several devices, including a couple of Linux PCs, an M1 MacBook Air, and a Microsoft Surface Go. If Windows 11 didn't support touchscreens, I would have gone with an iPad. However, Windows 11 is the _best_ touchscreen OS to date.
Unlike iOS or iPadOS, Windows 11 runs desktop apps and combines the convenience of touchscreen scrolling/interaction with the desktop experience. Windows 11 does this very, very well.
I don't use Windows anymore but I remember thinking "this is exactly what I've always wanted from a convertible/touch-support-in-desktop OS"...
Compiz ran fine with a 128MB GPU and 512MB of RAM.
Now that I think about it, Mac OS X was doing GPU compositing back in the early 2000s, and those machines usually only had about 16 MB of VRAM. I remember it running fairly well on a 2005 Mac mini G4 with 32 MB of VRAM.
You didn't ask. It is, as you say, your personal opinion.
From my POV, current Web is fine and the fact that browsers are powerful liberated us from writing specialized desktop apps for various OSes. I am much happier writing a Web UI than hacking together Win32 or Qt-based apps. Or, God forbid, AVKON Symbian OS UI. That was its own circle of hell.
I use macOS and I very much dislike anything built with cross-platform GUI toolkits, and especially the web stack. And it's always painfully obvious when something is not native. It doesn't behave like the rest of the system. It's not mac-like. It draws its own buttons from scratch and does its own event handling on them instead of using NSButton. I don't want that kind of "liberation". I want proper, native, consistent apps. Most other people probably do too, they just don't realize that or can't put it into words.
The only counter-example out there known to me is IntelliJ-based IDEs. They're built with Swing, but they do somehow feel native enough.
Also, developer experience is not something users care about. And I'm saying that as a developer myself. Do use fancy tools to make your job easier, sure, but avoid those that stay inside your product when you ship it.
Users might not care about developer experience, but everything is a trade off: developer time is a cost, the cost of producing software is an input into how much it needs to cost. Users seem to want features delivered quickly, without much regard to implementation quality.
macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
I mean, 8 GB on stock machines today is barely manageable. You can't buy a Mac with less than 8 GB today; you can't even buy a phone with 2 GB or less. I'm not sure you're in a position to rail against high-memory bloat in computing today.
p.s. I say this as someone who uses macOS as their daily driver and has for a very long time
Nobody is a hypocrite for buying X gigabytes of ram but also wanting the naked operating system to use a much smaller amount, or wanting single programs to use a much smaller amount.
> macOS (and iOS) have incredibly good screen reader support, as well as all of the things you're complaining about in your original comment at the top of this thread. Clearly those things are absolutely gobbling memory, and yet you don't seem to connect the dots that they're directly contributing to high memory requirements of macOS?
What makes a screen reader gobble memory?
And it definitely shouldn't gobble memory when it's not running.
Running several instances of Chromium, though... You'll probably run one at all times anyway as your actual web browser, but additional ones in the form of "oh so easy to build" Electron apps don't help. In Apple's eyes, though, you should absolutely ignore other browsers and use Safari exclusively. It might not be as much of a memory hog as Chrome; I haven't researched this, this is simply my guess.
I also heard that M1 Macs are better at memory management compared to Intel. Again, I don't have any concrete evidence to back this up, but knowing Apple, it's believable.
But I understand that most of my complaints are the complaints of a power user with 25+ years of experience and muscle memory, and I'm not the target audience for almost any new app. You win :-(
a) nice looking, but less capable apps,
b) more expensive apps, or, apps that have to be paid even if they could be free in an alternate universe,
c) limited availability - app X only exists for Windows and not Mac, because either a Mac programmer isn't available or would be too expensive.
Developing for multiple UIs at once is both prone to errors and more expensive, you wind up paying for extra developers, extra testers/QA, extra hardware and possibly extra IDEs and various fees. Such extra cost may be negligible for Google, but is absolutely a factor for small software houses outside the richest countries, much more so for "one person shows" and various underfunded OSS projects.
I remember the hell that was Nokia Series 60 and 90 programming. Nokia churned out a deluge of devices that theoretically shared the same OS, but they had so many device-specific quirks and oddities on the UI level that you spent most of the time fighting with (bad) emulators of devices you could not afford to buy. This is the other extreme and I am happy that it seems to be gone forever.
OSS projects are a completely different story, of course; no complaints about OSS developers.
I prefer to pay $200 for native application than $100 for Electron one.
Oh, who am I trying to fool? Of course, it will be an Electron app with a $9.95/month subscription now :-(
As I said in my previous comment, this is quite expensive, and people inside Silicon Valley rarely understand how cash-strapped the software sector in the rest of the world is. In Czech, we have a saying: "a person who is fed won't believe a hungry one". SV veterans who are used to reams of VC cash supporting even loss-making businesses like Uber have no idea that the extra spending needed to hire another developer for several months somewhere in Warsaw or Bucharest may kill a fledgling or small company.
In this, the unity of the Web is a life-saver.
But, again, I'd prefer to make one thing good rather than two things merely good enough.
There is a small resurgence of the gopher protocol that I believe is rooted in this sentiment.
But this of course depends on the metrics you use. Windows 3.1, for example, was a huge crashing piece of crap that locked up all the damned time. MacOS at the time wasn't much better. Now I can leave Windows up for a month at a time between security reboots. Specialized Windows and Linux machines in server environments on a reduced patching schedule will stay up far longer, but generally security updates are what limits the uptime.
I remember running Windows applications and receiving buffer overflow errors back then. If you got a buffer overflow message today, you'd think that either your hardware is going bad or someone wrote a terrible security flaw into your application. And back then there were security flaws everywhere: 'Smashing the Stack for Fun and Profit' wasn't written until '96, well after consumers had started getting on the internet en masse. And if you were using applications like Word or Excel you could expect to measure crashes per week rather than crashes per month, many of which are completely recoverable in applications like Office.
This needs application support, by this broad definition all operating systems "saves state and comes back right where you started on a security update reboot".
Accessibility has actually gone down with the switch to web applications. Microsoft had an excellent accessibility framework with subpar but usable tooling built in, and excellent commercial applications to make use of the existing API, all the way back in Windows XP. Backwards compatibility hacks such as loading old memory manager behaviour and allocating extra buffer space for known buggy applications may take more RAM but don't increase any requirements.
I agree that requirements have grown, but not by the amount reflected in standby CPU and memory use. Don't forget that we've also gained near-universal SSD availability, negating the need for RAM caches in many circumstances. And that's ignoring the advance in CPU and GPU performance since the Windows XP days, when DOS was finally killed off and the amount of necessary custom-tailored assembly dropped drastically.
When I boot a Windows XP machine, the only thing I can say I'm really missing as a user is application support. Alright, the Windows XP kernel was incredibly insecure, so let's upgrade to Windows 7 where the painful Vista driver days are behind us and the kernel has been reshaped to put a huge amount of vulnerable code in userspace. What am I missing now? Touchscreen and pen support works, 4k resolutions and higher are supported perfectly fine, almost all modern games still run.
The Steam hardware survey says it all. The largest target audience using their computer components the most runs one or two 1080p monitors, has 6 CPU cores and about 8GB of RAM. Your average consumer doesn't need or use all of that. HiDPI and HDR are a niche and designing your OS around a niche is stupid.
SSDs won't replace RAM but many RAM caches aren't performance critical; sometimes you need your code to be reasonably fast on a laptop with a 5400 rpm hard drive and then you have very little choice of data structures. With the random access patterns SSDs allow this complication quickly disappears. You won't find many Android apps that will cache 8MB block reads to compensate for a spinning hard drive, for example.
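The trade described here, burning RAM on large aligned reads so a spinning disk seeks less, can be sketched in a few lines. This is an illustrative toy under assumed names (no eviction policy, not any particular app's cache):

```python
import io

class BlockCachedReader:
    """Serve small random reads from large cached blocks -- the
    HDD-era trade of RAM for fewer seeks. Illustrative sketch only;
    a real cache would also evict blocks to bound memory."""

    def __init__(self, f, block_size=8 * 1024 * 1024):
        self.f = f
        self.block_size = block_size
        self.cache = {}            # block index -> bytes
        self.underlying_reads = 0  # how often we actually hit the disk

    def _block(self, idx):
        if idx not in self.cache:
            self.f.seek(idx * self.block_size)
            self.cache[idx] = self.f.read(self.block_size)
            self.underlying_reads += 1
        return self.cache[idx]

    def read_at(self, offset, size):
        out = bytearray()
        while size > 0:
            idx, off = divmod(offset, self.block_size)
            chunk = self._block(idx)[off:off + size]
            if not chunk:
                break
            out += chunk
            offset += len(chunk)
            size -= len(chunk)
        return bytes(out)
```

On an SSD the whole `cache` dict becomes dead weight: random reads are cheap enough that you'd just seek and read directly.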
Windows didn't really see a lot of actual progress in this area since the Win2k days. Lots of activity and churn yes, but little actual progress.
May I remind you of https://www.enlightenment.org/
20 years ago, there were "live cds" that could do most of what you mention, at maybe 512 MB ram.
It definitely was pretty for the day, though.
I mean, we have higher resolution support and scaling for HiDPI, better/faster indexing, better touchpad support. Can you name anything else? Localization hasn't progressed that much; I remember already being able to select some barely spoken dialects on Linux 20 years ago.
It was also rendering Display PostScript on a 25Mhz '040. One of the first machines in its day that allowed you to drag full windows, rather than frames on the desktop. High tech in action!
So, the feature Windows 7 had? I remember running a 3D desktop with a compositor and fancy effects on a 1GB RAM laptop on Linux...
RAM requirements for Windows as an OS are ridiculous.
and to be honest, nowadays the biggest issue is the web browser and the sheer amount of memory and processing that modern websites use.
it's unbelievable.
Browsers are still going to be the sticking point, but with aggressive adblockers/NoScript and hardware that's not terribly old (NVMe storage is priority 1), you should be set.
But of course, snappiness isn't free and you have to spend some time doing first time set-ups and maintenance.
The problem is the web browser.
I’ve got 16 GB of RAM and the browser is using most of it. I can literally see the swap space emptying when I'm forced to sacrifice my browsing session (xkill the browser) due to constant swapping out to disk.
And I’m using a pci gen 3 nvme disk, and already lowered swappiness.
The problem is the web browser.
At this point, my primary use case for ad blocking isn't the ad blocking itself; it is 1. the security of blocking ads, one of the worst attack vectors in the wild, and 2. the greatly reduced system resources my browser uses. The ad blocking itself is a further bonus.
If you still have no success there, I'd suggest you try something like EndeavourOS. Browsers have issues but that is not normal. You're not using Debian stable on the desktop, right?
I know that it installs various libraries. I do not know why those libraries are dozens of megabytes each.
that said 2GB is acceptable considering the state of everything
not saying i wouldn't like to have QNX class back
I agree and I find the apologists to be completely wrong. I run a modern system: 38" screen, 2 Gbit/s fiber to the home. I'm not "stuck in the past" with a 17" screen or something.
The thing flies. It's screaming fast as it should be.
But I run a lean Debian Linux system, with a minimal window manager. It's definitely less bloated than Ubuntu and compared to Windows, well: there's no comparison possible.
Every single keystroke has an effect instantly. After reading the article about keyboard latency, I found out my keyboard was one of the lower-latency ones (HHKB), and yet I fine-tuned the Linux kernel's USB 2.0 polling of keyboard inputs to be even faster. ATM I cannot run a real-time kernel because the NVidia driver refuses to work with a non-stock kernel (well, that's what the driver says at least), but even without that: everything feels, and actually is, insanely fast.
I've got a dozen virtual workspaces / virtual desktops and there are shortcuts assigned to each of them. I can fill every virtual desktop with apps and windows and then switch like a madman on my keyboard between each of them: the system doesn't break a sweat.
I can display all the pictures on my NVME SSD in full screen and leave my finger on the arrow key and they'll move so quickly I can't follow.
Computers became very fast, and monitor sizes / file sizes for regular usage simply didn't grow anywhere near as quickly as CPU performance.
Windows is a pig.
It doesn't look the same for everyone, of course. It's not about some universalizable value like minimalism. But this is a great example of one of the dimensions in which a Linux desktop can just feel really great in an almost physical way.
- most motherboards reduce the DDR clock when using > 2 sticks.
- higher capacity RAM sticks use more “ranks” (AKA “banks”), which increases latency.
as of 2 years ago, 2x single rank DDR would limit you to 64GB. but 2 years is a long time in computerland: 64GB single rank sticks sound plausible.
It's not particularly interesting or pretty, but it works well and does most if not everything that you might need, so it's my choice for a daily driver. Here's the Debian wiki page on it: https://wiki.debian.org/Xfce
Apart from that, some folks also like Cinnamon, MATE, GNOME or even KDE. I think the best option is to play around in Live CDs with them and see which feel the best for your individual needs and taste. Do note that Ubuntu as a base distro might give you fewer hassles in regards to proprietary drivers, if you don't care about using only free software much.
I was already leaning towards XFCE so i will give that a try.
Also i did some reading on the proprietary drivers (nvidia, etc.) I'm going to install dual boot Debian/XFCE and Pop!_OS for the gaming.
I still can't believe that Windows has turned into such a bloatware/mess that i'm actually at a point i can't live with it anymore...
That is quite unfortunate, especially because there is some software that I think Windows does better - like MobaXTerm or 7-Zip (with its GUI), FancyZones (for window snapping) and most of the GPU control panels.
That said, as that article of mine shows, Linux on the desktop is actually way better than it used to be years ago and gaming is definitely viable, even if not all of the titles are supported. Sadly, I don't think that'll happen anytime soon, but it's still better than nothing!
I'll still probably go the dual boot route with Windows and Linux, or maybe will have a VM with GPU passthrough for specific games on Linux, although I haven't gotten it working just right, ever. Oh well, here's to a brighter future!
FreeBSD can be comfortably used on systems with 64 MiB of RAM for solving simple tasks like a small proxy server. It has always been good at this — back in the day cheap VPS often used it (and not Linux) precisely because of its small memory requirements.
There are smaller window managers but I choose this one as an example as it gives a similar experience to the windows xp of olds.
I have experimented with slimming down a desktop as much as possible. But once you start a web browser with more than 3 tabs, memory usage goes through the roof. In the end, if you want to run an old system with 512MB of RAM, you are kind of forced to use the web sans JavaScript and images. You are almost better off using links or w3m and TUI apps for everything. NetSurf can work too if you limit the number of tabs open.
On a 1GB system you can definitely use a modern web browser, but you definitely need the ad/tracker-removal extensions and have to take good care not to open more than 2-3 tabs or you will start swapping a lot.
I've taught high performance data structures to dev teams. I've tried to explain how a complex problem can sometimes be solved with a simple algorithm. I've spent decades on attempting to show coworkers that applying a little comp-sci can have a profound effect in the end.
But no. Unfortunately, it always fails. The mindset is always "making it work" and problem solving is brute-forcing the problem until it works.
It takes a special kind of mindset to keep systems efficient. It is like painting a picture, but most seem to prefer doing it with a paint roller.
I'm all for dedicating time and effort towards producing performant code, but it does come at a cost - in some cases, a cost of maintainability (for an extreme example there's always https://users.cs.utah.edu/~elb/folklore/mel.html). In fact I'd suggest in general if you design a library of functions where obviousness/clarity/ease-of-use are your primary criteria, performance is likely to suffer. And there are undoubtedly cases where the cost of higher-grade hardware (in terms of speed and storage capacity) is vastly lower than that of more efficient software. I'd also say performance tuning quite often involves significant trade-offs that lead to much higher memory usage - caching may well be the only way to achieve significant gains at certain scales, but then as you scale up even further, the memory requirements of the caching start to become an issue in themselves. If there were a simple solution it would have been found by now.
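The caching trade-off described here is easy to see with Python's `functools.lru_cache`: the `maxsize` knob is precisely the memory-versus-recomputation dial that becomes a problem at scale. The `expensive` function below is of course a stand-in for real work:

```python
from functools import lru_cache

# Hypothetical "expensive" computation; caching trades memory for speed.
@lru_cache(maxsize=None)    # unbounded: fastest repeat calls, most RAM
def expensive(n):
    return sum(i * i for i in range(n))

@lru_cache(maxsize=128)     # bounded: caps memory, may recompute evicted entries
def expensive_bounded(n):
    return sum(i * i for i in range(n))

expensive(10_000)
expensive(10_000)                    # second call is served from the cache
print(expensive.cache_info().hits)   # -> 1
```

At small scale the unbounded cache is a pure win; as the key space grows, its memory footprint becomes exactly the kind of secondary problem the paragraph above describes.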
Let's say I build a sorting algorithm that is O(N^2) complexity and works fine for small inputs (takes <1 millisecond), but it is going to be used for large data systems. Suddenly it takes hundreds of thousands of hours to sort the data.
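As a rough sketch of that point, assuming nothing beyond the standard library: a deliberately O(N^2) bubble sort against Python's built-in Timsort (O(N log N)). The gap is negligible at 50 elements and dramatic at a few thousand:

```python
import random
import time

def bubble_sort(xs):
    """O(n^2) comparison sort -- fine for tiny inputs, ruinous at scale."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

for n in (50, 2000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    bubble_sort(data)
    t_quadratic = time.perf_counter() - t0
    t0 = time.perf_counter()
    sorted(data)    # Timsort, O(n log n)
    t_builtin = time.perf_counter() - t0
    print(n, f"{t_quadratic / max(t_builtin, 1e-9):.0f}x slower")
```

Scale n up by another factor of 1000 and the quadratic version goes from "slower" to "does not finish".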
One of the corps I worked with went full scalability in their architecture. One-click deployments, dynamic scaling of servers, rebalancing of databases, automatic provisioning of storage. They were handling 40-50k requests per second with their 15-ish large server farm, which could scale down to 5 servers, or up to 50-ish before it began to wobble.
I got called in because the company had gotten a large client that needed 100k requests per second. They tried scaling the system to fit the need, but the whole thing got unstable and their solution was "more operations people to manage it".
I built a custom solution for the backend. Took about two months. The new system could do about 2100k requests per second on one server. Scalability of the new system was ~90% efficient as well, so lots of capacity for the future.
None of their developers understood computers or the science behind them. They were all educated and experienced developers, but none of that were applied to the problem. They were just assembling parts from the hardware store until something worked, and the resulting Frankenstein's Monster was put into production.
No, I'm talking about handling requests. In this particular case, requests (32 to 64 bytes) were flowing through several services (on the same computer). I replaced the processing chain with a single application to remove the overhead of serialization between processes. Requests were filtered early in the pipeline, which made a ~55% reduction in the work needed.
Requests were then batched into succinct data structures and processed via SIMD. Output used to be JSON, but I instead wrote a custom memory allocator and just memcpy the entire blob on to the wire.
Before: No pre-filtering, off-the-shelf databases (PSQL), queue system for I/O, named pipes and TCP/IP for local data transfer. Lots of concurrency issues, thread starvation and I/O bound work.
After: Aggressive pre-filtering, succinct data structures for cache coherence, no serialization overhead, SIMD processing. Can saturate a 32 core CPU with almost no overhead.
[1] https://www.techempower.com/benchmarks/#section=data-r13&hw=...
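A minimal sketch of the shape of that rework, with a hypothetical record layout and filter (the real system's formats aren't given here): instead of JSON-serializing each record individually, filter early and pack the survivors into one contiguous blob that can go onto the wire in a single write.

```python
import json
import struct

# Hypothetical fixed-size wire record: 4-byte id + 8-byte value, no padding.
RECORD = struct.Struct("<IQ")

def handle_json(requests):
    # Before: serialize every surviving record individually (per-record overhead)
    return [json.dumps(r).encode() for r in requests if r["id"] % 2 == 0]

def handle_batched(requests):
    # After: filter early, then pack survivors into one contiguous buffer
    survivors = [r for r in requests if r["id"] % 2 == 0]
    blob = bytearray(RECORD.size * len(survivors))
    for i, r in enumerate(survivors):
        RECORD.pack_into(blob, i * RECORD.size, r["id"], r["value"])
    return bytes(blob)    # one contiguous blob: a single write to the socket
```

The even-id filter stands in for whatever early rejection the real pipeline did; the point is that filtering before any serialization, and emitting one fixed-layout buffer, removes both the per-record work and the format overhead.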
How about all those sandboxes, protections and mitigations?
Nowadays people care about security waaay more than people did 20-30 years ago.
But this isn't just Windows, currently I am on Kubuntu 22.04 and it is using about 1.5GB to get to the Desktop! Yes it is very smooth and flash but it seems like a bit much to do this.
This is why I am interested in projects like Haiku and Serenity OS, they may bring some sanity back into these things.
I guess that with careful selection of GUI components one can fit an empty desktop into 60 MB.
Until you start a browser, anyway.
But it's good to know that we can still 'hyper-mile' our OS.
https://en.m.wikipedia.org/wiki/GEOS_(8-bit_operating_system...
Obviously there were huge limitations but it shows what can be done. This fit on one 170K floppy and ran on a ~1 MHz 8-bit machine with 64K of RAM.
In the 1990s I ran both Linux and Windows on less than 64M of RAM with IDEs, web browsers, games, and more.
If I had to guess what would be possible today, I'd fall back on the fairly reliable 80/20 rule and posit that 20% of today's bloat is intrinsic to increases in capability and 80% is incidental complexity and waste.
Yet, I could do
* word processing
* desktop publishing
* working with scanned documents
* spreadsheets
* graphics
* digital painting
* music production
* gaming (even chess)
* programming (besides BASIC and ASM I had a Pascal compiler)
* CAD and 3D design (Giga CAD [1], fascinated me to no end)
* Video creation [2]
For all these tasks there were standalone applications [3] with their own GUI [4]. GEOS was an integrated GUI environment with its own applications and way ahead of its time [5].
It still blows my mind how all this could work.
My first Linux ran on a 386DX with 4M of RAM, but that's probably as low as one can get. Even the installer choked on that little RAM, and one had to create a swap partition and swapon manually after booting but before the installer ran. In text mode it was pretty usable though; X11 worked and I remember having GNU chess running, but it was quite slow.
[1] https://youtu.be/ZEf9XMrc5u8
[2] OK, this one is a bit of a stretch but there actually was Videofox for creating video titles and shopping window animations: https://www.pagetable.com/docs/scanntronik_manuals/videofox....
[3] Some came on extension modules which saved RAM or brought a bit of extra RAM, but we are still talking kilobytes. For examples see https://www.pagetable.com/?p=1730
[4] Or sort of TUI if you like; the strict separation of text and graphics mode wasn't a thing in the home computer era.
[5] The standalone apps were still better. So, as advanced as GEOS was, I believe it was not used productively much.
But if you had to use that software now, you'd say (justly) that it's extremely basic and limited, and that interoperability with other systems is not great.
What can be done ≠ what's comfortable to use.
For me it's more about the excitement that the bright future lay ahead of us so clearly mixed with a slight disappointment that I sometimes feel we could have made more out of it.
[0] https://github.com/smallstepforman/Medo
Zawinski’s Law - every program on windows attempts to expand until it can be your default PDF viewer. [cloud file sync, advertising display board, telemetry hoover, App Store…]
When we see egregious examples like Windows, then it's arguable having constraints might be desirable. It is well-known that "limitation breeds creativity". It's certainly true outside of "tech" companies. I have witnessed it first hand. "Tech" companies are some sort of weird fantasy world where stupidity disguised as cleverness is allowed to run rampant. No more likely place for this to happen than at companies that have too much money.
edit: This reminds me of some rants from Casey Muratori about VS [0] and Windows Terminal [1]
[0] https://youtu.be/GC-0tCy4P1U
[1] https://youtu.be/hxM8QmyZXtg
Linux with a lightweight GUI for example can still run okay with just 128MB. I ran Debian with LXDE on an old IBM T22, and it worked perfectly well. Running Firefox was a problem (but did eventually work), but something more stripped down like NetSurf or Dillo is blazingly fast.
[1] https://www.seamonkey-project.org/
[2] https://noscript.net/getit/ [scroll down to bottom, note there is no support]
https://github.com/gorhill/uBlock-for-firefox-legacy
Also, there's a forked dillo called dillo-ng
https://github.com/w00fpack/dilloNG
with mpv support on context menu that works fine with these sites:
https://68k.news
https://simple-web.org/projects/simplytranslate.html
https://simple-web.org/ (several services without JS). On the Invidious video links, simply right click the link and choose "open with mpv".
Also, on Gopher (lynx is good for this) gopher://magical.fish has a huge service list usable in text mode.
Software efficiency is a serious equity and environmental issue, and I wish more people would see it that way.
There was (is? Not sure) a version of Firefox for PowerPC Mac OS X - TenFourFox - that brought modern Firefox features/support to Macs that were long past their prime. The developer mentioned that their favorite report during development was: "One of my favourite reports was from a missionary in Myanmar using a beat-up G4 mini over a dialup modem; I hope he is safe during the present unrest."
http://tenfourfox.blogspot.com/2020/04/the-end-of-tenfourfox...
This is what can happen when things are optimized for the people, not the business. This is part of why I still use a Core 2 Duo as my daily runner, if it ain't broke don't fix it.
But isn't the primary application for these machines going to be the web browser, which is pulling in so much JS insanity that the web sites won't render well anyway?
Companies will invest in what pays the bills. And hyper optimising for customers with no money isn’t it.
Nobody has any actual clue what they're doing, everyone keeps writing code for the compiler hoping for the best and the rest of the world has to buy new machines because the programmers of the last decades sucked.
That, btw, includes most of you people reading this. You're fucking welcome.
A top-of-the-line laptop CPU from 20 years ago likely just doesn't support addressing more than 4GB of RAM. Forcing it to work on modern resource-heavy Web pages and media is like forcing a GPU from 20 years ago to run Skyrim. It's just not adequate.
• Read the news
• Post on social media
• Make video calls
• Use instant messaging
• Create and edit word documents/presentations/spreadsheets
Today I use my computer for all of those same things... and yet they all require drastically more memory (and CPU, GPU, etc). What happened, and how does this benefit consumers? Yeah, modern web pages are resource-heavy—but to what end†?
In some cases, the requirements really did change. For example, I can now watch videos in 4K; my 2008 computer could handle 1080p, but I imagine it wouldn't have handled 4K as well. However, I suspect many users of old machines would be perfectly happy to drop down to a lower resolution.
---
† Something I find amusing in all this... people often say they're glad Flash applets died because they were slow. Nowadays, instead of Flash, we use browser apps written in Javascript. I wonder how "slow" those apps would run if you threw them on a computer from the Flash era. (This isn't to discount other problems with Flash, although I do think it has a worse reputation than it deserves.)
I think Apple only recently stopped selling 4 GB computers. And their phones from last year ship with 4 GB of RAM while being perfectly able to do all the things you've mentioned as well.
I used to have a 2016 dual core macbook pro with integrated graphics and 8gb of RAM or something. The machine was great when I got it, but 18 months ago it was limping along and I finally decided to get rid of it.
And it wasn't any 3rd party apps that killed the machine. Every time the machine started up, iphotoanalysisd or some random spotlight service or something would be eating all my CPU. It was always a 1st party Apple app which was making it slow. And the graphics felt laggy. Just moving windows around felt bad a lot of the time, even when I didn't have anything open. Xcode would sometimes lag the machine so much that it would drop keystrokes while I was typing. I had RAM to spare - it was a CPU problem.
In the process of wiping the machine, I booted into Recovery mode and it booted the 2016 recovery image of macos. Holy smokes - the graphics were all wicked fast again! I spent a couple minutes just moving windows around the screen in recovery mode marvelling at how fast it felt.
I wonder if reverting to an old version of macos would have fixed my problems. As far as I can tell, this was all Apple's fault. They piled up macos with so much crap that their own computers couldn't cope with the weight. I also wonder if they broke the intel graphics drivers in some point release somewhere along the way, or they started relying on GPU features that Intel's driver only had software emulation for.
Modern macos still has all that crap - the efficiency cores in my M1 laptop are constantly spinning up for some ridiculous Apple service or other. But at least now that still leaves me with 8 P-cores for my actual work. It's ridiculous.
I bet linux would have worked great on that old laptop. I wish I tried it before turfing the machine.
Compare the memory usage of:
• 2008-era Skype and iChat vs Slack, Teams, and Discord.
• 2008-era web pages (including with Flash embeds) vs modern web pages.
• Microsoft Office 2007 vs current Microsoft 365.
And it's not only or even primarily memory, but also CPU requirements and so on.
Windows 10 (and I assume 11) has an option to "refresh" Windows in Settings.
That has never been a reasonable expectation in the history of computing.
> That has never been a reasonable expectation in the history of computing.
Yes, but again, why? As I see it, everyone has been conditioned to this lie that computers naturally slow down over time, because that's the way it has always been relative to the speed of current software. Originally, that was for a good reason—I'm glad programs now use full-color GUIs. But now?
What would actually happen if Moore's law ended tomorrow, and we were no longer able to make computers faster than they are today? I suspect that a (slim) majority of computer users would actually benefit. Not hardcore gamers, not scientists, and certainly not software developers--some people really do need as much performance as they can get. But for the people who just need to message friends, write documents, check email, etc., the experience would be unchanged—except that their current computers would never slow down!
Years ago while I was at a startup, I accidentally left my laptop at work on a Friday. I wanted to write some code over the weekend. Well, I had a raspberry pi kicking around, so I fired up nodejs on that and took our project for a spin. But the program took ages to start up. I hadn't noticed the ~200ms startup time on my "real" computer, but on a r.pi that translated to over 1 second of startup time! So annoying! I ended up spending a whole morning profiling and debugging to figure out why it was so slow. Turns out we were pulling in some huge libraries and only using a fraction of the code inside. Trimming that down made the startup time ~5x faster. When I got into the office on monday, I pulled in my changes and felt the speed immediately. But I never would have fixed that if I hadn't spent that weekend developing on the raspberry pi.
Since then I've been wondering if there's a way to do this systematically. Have "slow CPU Tuesdays" or something, where everyone in the office turns off most of their CPU cores out of solidarity with our users. But I'm not holding my breath.
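One way to approximate the fix from that weekend in Python (names hypothetical): defer heavy imports until first use, so startup stays cheap and you only pay for what a given run actually touches. Python's `python -X importtime` flag is handy for finding the offenders in the first place.

```python
import importlib
import time

def lazy_import(name):
    """Return a zero-argument callable that imports `name` on first call.
    The kind of trim that cut the Node startup time described above,
    sketched in Python."""
    module = None
    def get():
        nonlocal module
        if module is None:
            module = importlib.import_module(name)
        return module
    return get

t0 = time.perf_counter()
get_json = lazy_import("json")   # nothing imported yet: startup stays fast
startup_cost = time.perf_counter() - t0

get_json().dumps({"ok": True})   # the import cost is paid here, on first use
```

The same idea applies at the bundler level in JS land (tree-shaking, dynamic `import()`): most of the win is simply not loading code you don't run.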
Recently it's become less possible to run the same software for 10+ years because so many things are subscription only and have unnecessary networking, which makes it necessary to patch security flaws, and then you have to accept whatever downgrade the vendor forces on you.
Older applications that you used to be able to just install run just as well as they did the day they came out on the hardware available at the time. The idea that computers "get worse" is entirely a phenomenon of the industry being full of incompetence. Even (or perhaps especially) programmers at FAANG companies are just not very good at their jobs.
Check out the argument Casey Muratori got into with the Microsoft terminal maintainers about how slow the thing was. He got the standard claims about how "oh it's so complex and Unicode is difficult and he's underestimating how hard it is", so he wrote a renderer in a few hours that was orders of magnitude faster, used way less memory, and had better Unicode support.
File system fragmentation was a very significant problem when most people still used HDDs as their primary mass storage media. SSDs are far less affected by fragmentation because of much faster random access times, but on HDDs fragmentation made performance suffer badly.
The Windows Registry is an arcane secret not even Microsoft fully comprehends at this point, and it can get very messy if a user installs and uninstalls lots of programs frequently. This is, of course, a problem with uninstallers not uninstalling cleanly and not a problem with Windows or the users. With so much crap moving to Chrome online-software-as-a-service outfits, users aren't (un)installing as many programs as frequently anymore, but an unkempt Windows installation can definitely slow down over time.
Software in general also just gets more and more bloated as the moons pass. More bloated software means less efficient use of hardware, meaning less performance and more user grief over time.
But it doesn't really have enough RAM to run a modern web browser. A few tabs and we are swapping. That's unusably slow. A processor that's 5 or 20x slower is tolerable often. Working set not fitting in RAM is thrashing with a 1000x slowdown. And so this otherwise perfectly useful computer is garbage. Not enough RAM ends a machine's useful life before anything else does these days, in my experience.
Atom n270 netbook, go figure.
Also, run this to get a system wide adblocker:
EDIT: wrong URL

Of course it can't run all today's bloated software, but we're talking about the operating system here, not the applications.
making 10% of users unreachable in order to more easily reach the other 90%. yeah, it’s a fine business strategy. though i do wish devs would be more amenable to the 10% of users who end up doing “weird” things with their app as a result. a stupid number of chat companies will release some Electron app that’s literally unusable on old hardware, and then freak out when people write 3rd party clients for it because it’s the only real option you left them.
https://aiimpacts.org/trends-in-dram-price-per-gigabyte/
DRAM density and cost isn't improving like it used to.
Also memory efficiency is about more than just total DRAM usage; bus speeds haven't kept pace with CPU speeds for a long time now. The more of the program we keep close to the CPU -- in cache -- the happier we are.
In my previous job, rather than give people root access to their laptops, we had to do things like run a Docker image that ran 7zip and pipe the I/O to/from it. I'm not kidding, we all did this, and it was only bearable thanks to bash aliases and the fact that we had 16GB of RAM.
Nothing wrong with Tiny11 though, if you know what it is good at and use it for that. Namely, "offline" Windows for some appliance-like usage (e.g. factory controls, display screens, et al) when Linux won't do for whatever reason and licensing Windows IoT isn't possible (small business/personal project/etc).
The remaining content unique to WinSXS is either for cryptographic validation, app compat, or the driver stack.
WinSXS looks like a huge folder in explorer, because explorer's size estimates do not tell you about hard links. It's not that big. I need to question somebody who thinks removing it will remove a lot of bloat.
Any space reclaimed using dism’s startcomponentcleanup is only from removal of superseded updates which normally happens automatically whenever the maintenance task runs after a certain period of time.
Note that I explicitly consider backups separate from superseded updates. Superseded updates are kept for a period of time to allow the user to uninstall a newer update.
Get the size of every “file” in the volume along with the file id of each and then subtract the size of any files with a matching file id that are in the WinSxS. Then sum the size of the remaining files that were from the WinSxS.
You can get the file id using https://learn.microsoft.com/en-us/windows/win32/api/winbase/... or fsutil, etc.
You could also probably execute “fsutil hardlink list” for every “file” in WinSxS and then ignore any that list more than one result and sum the size of the remainder.
There are of course more efficient ways to do this but those are some quick hacks.
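A cross-platform sketch of that bookkeeping in Python, using `(st_dev, st_ino)` as the file id so hard-linked copies are counted once. This is a rough stand-in for the fsutil/Win32 approach described above, with hypothetical names; on NTFS, `os.stat`'s `st_ino` surfaces the same file id the Win32 API exposes:

```python
import os

def unique_size(root, everywhere):
    """Bytes under `root` not shared (via hard links) with files elsewhere.
    `everywhere` is an iterable of all file paths on the volume.
    Illustrative sketch of the file-id subtraction described above."""
    root_prefix = os.path.abspath(root) + os.sep

    # File ids of everything living outside `root`.
    outside_ids = set()
    for path in everywhere:
        if not os.path.abspath(path).startswith(root_prefix):
            st = os.stat(path)
            outside_ids.add((st.st_dev, st.st_ino))

    # Sum each unique file id under `root` once, skipping shared ones.
    total, seen = 0, set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            st = os.stat(os.path.join(dirpath, name))
            key = (st.st_dev, st.st_ino)
            if key in seen or key in outside_ids:
                continue    # already counted, or hard-linked outside root
            seen.add(key)
            total += st.st_size
    return total
```

For a real WinSxS measurement you'd enumerate the volume once (not per query) and use the USN journal or MFT for speed, but the accounting is the same.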
my laptop only needs to run a few things:
* browser
* VS Code
* Steam
* the Microsoft drawing app
* some office stuff
* Sublime
* Discord
which all update pretty regularly.
The age of the desktop app has been replaced by the age of the browser and Electron-based apps. I can imagine businesses who built their own setups back in the age of the desktop app being stuck with it, but for the most part I don't think I use Windows' backwards compatibility anymore.
* Steam (the root process, not the subsequent Chromium child processes) is 32-bit, as are a lot of games.
* Discord is 32-bit.
Discord for example is literally just a chrome-less Chrome; the zoom in/out hotkeys in Chrome still work in it.
This is also not mentioning how no Electron program ever visually adheres to the desktop environment it's running in.
No, it also includes:
* Voice (and text chat) overlay for games (DLL)
* Game integration via lobby & rich presence APIs
* Krisp noise cancellation (requires a DLL as well)
* (Better) screenshare (Chrome has an API for window sharing now but Discord's is a bit more robust with several backends in case one fails)
* System-wide keybinds
* Scripting support via gaming accessory apps (Logitech G HUB, HyperX, etc.)
It also (anecdotally) works faster than the web client, in my experience.
Just because the zoom controls work (which is an accessibility feature) doesn't mean it's a barebones Chrome wrapper.
Discord for instance has this “currently playing game X” feature. I have zero interest in broadcasting what I’m doing at the moment to the world, but many do and have this feature enabled. Good luck implementing that in a browser-confined web app.
An example: https://stackoverflow.com/a/39569062
basically a Proton/NixOS for Windows :)
Every time I try VS Code I just can’t commit. There’s a bit too much going on and it never feels as tight as Sublime.
I did just get a new Windows machine, so maybe I should try it on that.
I’m starting to get back into emacs recently though because I like fiddling with tools more than productivity.
Do you want steam to actually run any game? :D
I use an eeePC laptop with 2 GB of RAM as my home computer. It's quite usable with Linux.
Not that I'm planning to install Win11 on it, but the assumption that 2GB is enough only for embedded devices is incorrect.
I think that's the point - some people assume that 2GB is meaningless whereas others see it as a HUUUGE amount of memory. Never mind historical comparisons; consider what a modern phone can do with 2GB of RAM.
Taking up a lot of space on your drives for data to maintain backwards compatibility makes sense. Why, when not being actively used, does it need to occupy gigabytes of RAM?
There's no need, which is why it doesn't.
Here's the relevant quote:
> Moreover, removing the Windows Component Store (WinSxS), which is responsible for a fair degree of Tiny11’s compactness, means that installing new features or languages isn’t possible.
I'm not saying Linux is for everyone, but the kind of people creating and running these scripts really should have no issue daily driving Ubuntu or even Arch. Or if they desperately need photoshop or whatever, get a mac.
It's like watching people constantly go back to an abusive relationship.
The same can be said for those working on jailbreaks and the M1 Linux project, as well as all of the cracking/hacking scene. For some people, it's far more interesting and enjoyable to fight --- and possibly win --- than just "abandoning ship".
I'm pretty stuck to Windows as I need it to drive my home lab. I need to run Windows to
1. Get data from an old optical spectrometer. It was designed for optical endpointing of plasma etching. And one will have a hard time finding anything that is not running Windows in a fab (except lithography).
2. Run a 28-year-old piece of software to acquire timestamps from an HP 53310A modulation domain analyzer
3. Grab frames from an old xray detector
4. Work with two NI DAQ cards. Yes, they are supposed to work on Linux, but I always get weird errors on my Ubuntu work computer while they never failed me on my Windows laptop.
5. Use Autodesk Inventor to prepare files for 3D printer/machine shop. Siemens NX used to work on Linux, but apart from that, there is not a single piece of non-toy 3D CAD software that I'm aware of that supports Mac or Linux.
6. LTSpice simulations and Altium Designer layouting.
Windows is the only first class citizen in many areas, software development and artistic work are two exceptions.
And so far, it seems I can still always be one step ahead of MS in the anti-consumer war, so I'm not too worried.
I'm kind of in that situation and I don't think going with Mac and the Apple ecosystem really is better than trying to use Windows 10 as long as possible on an older Thinkpad.
Everybody who's using tools like Photoshop professionally has been "shaped" to feel well in the Adobe ecosystem. I doubt that's good but that's how it is.
Photoshop, Illustrator, InDesign, they all feel and work similarly, which helps with transitioning/switching between these tools without big issues.
Now take Gimp, Inkscape and Scribus against that. Everything looks different and probably works differently, too. I need to get work done, not learn three separate programs. Also Scribus seems to be dead; the latest dev blog entry is from 2016.
Serif is doing great work with Affinity, but Adobe is still going strong and defines the professional industry. As long as that's the case we're stuck with Windows/Macs for professional work.
For the people who do want to use Windows 11, and who see it for what it is, it's pretty great. For the people who use Windows XP/7 or who stick to some minimalistic un-featured XFCE-running underpowered Linux machine, you do your own thing. No need to force that on everyone else.
for the non tech savvy - windows is still a great choice for those wanting to simply game and not learn something new like linux - these are the same folks that do notice the OS being bloated and see the ads, while asking for help from those who know more; a lot of us don't have the time nor energy to fully support a vast array of friends' systems. these debloated windows builds are great for those folks, and for me not having to /shrug and make people buy more hdd space for nothing.
was it not linus himself who mentioned that linux as a popular desktop os won't be a thing until manufacturers who provide prebuilt OS's (and support them) ship them with linux? but in all honesty i feel that the X vs Wayland situation needs to be a bit more solidified, similarly with alsa/pulse/pipewire lol ; but those are different issues
Twenty years ago I had already been installing Windows XP to FAT32 volumes directly to be more compatible with W9x multibooting. I didn't know anybody else doing this (some thought it couldn't be done), but every time I installed XP I could see the names of every driver as it loaded during creation of the pre-installation environment. The very last two drivers are FAT32.SYS followed by NTFS.SYS. I figured Windows might have first been made functional on FAT32 but launched with the intention of migrating most people to NTFS, as seen.
In my later experimentation I found that Vista would run from a FAT32 partition but default Windows 7 would not do it very easily, simply because the WinSxS folder (pronounced win-sucks) was oversized in an insidious way.
The W7 WinSxS folder size was bigger than Vista's but it did not approach the maximum size that FAT32 can handle.
Instead it was the unnecessarily, stupidly long filenames which overran the long-filename handling ability of FAT32 early, once there were enough of them. Something the best engineers would never have even considered doing at the time, much less put into production.
By judiciously deleting the majority of the contents of WinSxS (but not all by any means), W7 can be run from FAT32 as well without any functional shortcomings as far as my office was concerned.
The modern approach to testing this for yourself would be to install the default W7 to a regular NTFS volume, then debloat the WinSxS folder manually, perhaps in safe mode or when booted to an alternative OS so none of the files on the W7 volume are in use at the time.
Reboot to something like the W11 USB setup media, "Troubleshoot" to go to the command prompt (instead of installing W11), then capture (back up) the debloated dormant W7 partition manually using DISM.EXE.
Then later, on a freshly formatted FAT32 drive, apply the captured W7 system, again using DISM.
Create new boot files for the newly applied W7 system using BCDBOOT.EXE.
Boot W7 while it's on FAT32 and prosper.
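The capture/apply/bcdboot flow above would look roughly like this; drive letters and the image name are my assumptions, so adjust them to your own layout:

```shell
:: From the setup media's command prompt: capture the debloated, dormant W7 volume
dism /Capture-Image /ImageFile:E:\w7-debloat.wim /CaptureDir:C:\ /Name:"W7 debloated"

:: Later, apply the captured image onto the freshly formatted FAT32 drive
dism /Apply-Image /ImageFile:E:\w7-debloat.wim /Index:1 /ApplyDir:C:\

:: Create boot files for the newly applied system (legacy BIOS boot)
bcdboot C:\Windows /s C: /f BIOS
```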
It works not much faster than on an NTFS volume, but if you can reboot to Windows 9x on a multiboot system, you can search the FAT32 W7 volume blazingly faster than the identical W7 system searches itself on NTFS.
Now of course all of this needs to be done in legacy BIOS mode since UEFI alone is not adequate for such continued full PC performance.
I guess I could have been playing video games instead but reaching this level seemed just as rewarding anyway.
Wonder if W11 would do this.
Edit: For extra credit I already put W11 onto old BIOS PC's without any GPT, with regular MBR like it was W10.
Bypassing hardware restrictions into smaller-than-recommended NTFS volumes using DISM.
i do miss xp's cleanup hotfix cache button though
I hope you have found the MSMG toolkit for your testing
This is not strictly true; there are tonnes of reasons to use Windows. For example, I run thousands of game servers.
If I can shave memory usage of the OS that translates to a lot of cost savings.
Windows XP/Vista/7 and soon: 10 being EOL does force me to upgrade.
But since a few years back, most games I were interested in ran perfectly fine on Linux. I haven't rebooted into Windows for almost a year now. So I think I will, instead of upgrading to 11, eventually delete it and use the second SSD to hold my games on Linux and won't look back.
I remember the days when I was building bare metal recovery for some of our Windows systems using WinPE, imagex and Python. There was this feeling of sane people pouring into M$ to modernize the OS a little bit, and cool stuff came out. But in the end, it's still the same inscrutable mess it always was. Nowadays with more and more ads and unnecessary fluff that gets in the way.
But... Windows 11 is just... annoying. The UI is worse than 10 in all the ways that matter to me. So I finally put Linux Mint on this laptop, and it's been pretty good. Not flawless, but really good. By default, I install and play games on Steam.
Notable exception is Anno 1800, which has a clunky multiplayer setup anyway, and just doesn't connect under Linux, but works (begrudgingly) under Windows.
Northgard has been awesome, but just tonight I had a bunch of server connection issues - can't 100% blame Linux, though 15 minutes into a multiplayer game, I was dropped while the two Windows players kept playing. But it's not conclusive!
At any rate, I think for many PC gamers, Linux gaming would work, though it's still not 100% "install, join, play" for every game.
It's just a stable version frozen in time but heralding it as the bloat-free alternative is no longer true.
I have access to it through work and I gave it a spin recently but it's no longer what it used to be.
Which one? Or I guess it's not a public release?
I installed IoT 2021 (Windows 10 IoT Enterprise LTSC 2021, version 21H2) like 3 weeks ago. There was nothing: no App Store, and less telemetry (there is some, but significantly less than on the "normal" Windows versions; then again, I reinstalled the App Store, so maybe that's why)
*Edited the wording
It looks like this as a fresh install https://ia904606.us.archive.org/11/items/en-us_windows_10_io...
I will check which ISO it was and reply back here because I'm very curious now too.
Now, of course, the struggle is to see how I can get the ISO and get it activated. I'm woefully out of the loop.
Well, it's the users that are uploading, but yeah, Archive.org has an insane amount of stuff
I use KMS activated non-IoT LTSC 2021 on my obsolete Surface. MS will not sell that edition to "consumers" like me, so I don't feel guilty at all for pirating it.
Unlike usual Enterprise editions, I don't think IoT Enterprise SKU will work with KMS. The only possible activation option seems to be with a PKEA key.
For a machine that never sees the internet, the IoT version runs in Deferred Activation state. So it is useful for an intranet machine that never sees the outside.
What about Windows 11, does something like "Windows 11 IoT Enterprise LTSC" exist also? Would that be equally good/debloated ?
Thank you for the tip btw!
I just use Intel’s Clear Linux so… meh.
The main difference is that IoT has longer lifecycle support, the activation method is different (but you don't even have to activate), and the IoT version is only available in English
But it doesn't really matter; they are virtually the same
EDIT: spelling and attempted to adjust elegance
So far Anno 1800 has an issue where multiplayer games only connect if I play on Windows (but single player runs flawlessly in Linux Mint.) Every other game I've played has been great. StarCraft II (in Bottles), Conan, Valheim, Northgard (so far.)
PC gaming on Linux is not perfect, but it's really damn good.
(This is on a Ryzen 7 + Geforce RTX laptop)
It got multi-day battery life out of the box, which is far in excess of what Google advertises for that hardware.
Once I installed google play services (which have zero end-user benefit, other than enabling compatibility with apps that have bundled google spyware), battery life more than halved, bringing it in line with what Google claims.
I suspect anti-trust and consumer protection lawsuits would start flying around if more people realized that over 50% of their phone battery was there to support malicious bundled code.
(Also, third-party implementations of maps, such as Organic Maps and HERE WeGo, can be installed and run fine without impacting battery life when they are not running.)
The answer is that the actually-useful features are bundled with mandatory malware that does need to run in the background in order to implement 24/7 surveillance. That bundling clearly violates US antitrust law.
Also, I suspect most people buying > $1000 phones would be willing to pay tens of dollars for lifetime licenses for maps, pay and cast (which is roughly what they would cost as standalone products), especially if they were privacy preserving and doubled the phone’s battery life.
Take gaming for example, I pretty much only use my PC for gaming (I prefer my Mac for general purpose stuff) and there is a lot there that is really unnecessary. But where this really becomes an issue is on devices like the Steam Deck.
I installed Windows 10 on mine, used a debloat script to remove anything that was not strictly necessary for gaming, downloading games, and related tasks and I was able to get better performance and battery life for the same games than I did under SteamOS.
While I imagine that this would complicate testing of updates to support these separate purposes, it feels like Windows is trying to do too much all at once.
However I also recognize that much of what I removed is also things like telemetry that I doubt they would remove.
They don't care. This won't bring them money. Showing you ads and tracking you will, so they'll continue doing it.
A lot of the time I feel like you end up with having to do a lot of research for a very minor practical effect.
Basically what I did was start with the default and then uncheck (or check? I don't remember what the UI called for now) anything related to Xbox and the Store, and I didn't have any issues.
I also did a comparison before and after and it was actually a pretty decent improvement: about a 10 fps improvement over SteamOS and a normal Windows 10.
For me the biggest incentive was being able to play xbox game pass games and not needing to worry about any compatibility issues with Proton which is why I went down that route.
But yeah your second part is very true. I feel, the impact is minimal if you are on a traditional PC. But on something with such limited resources like a Steam Deck, the difference can be going from 40 fps to mid 50's and a few more minutes of battery life.
But it isn't something I would recommend most people do. More just kinda pointing out that with the effort I think Microsoft could make a lean Windows really just by taking a look at what is actually necessary to be run for specific tasks.
Possible by making your own custom image using dism. Everything debloat scripts are doing can be done before even making an ISO.
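A rough sketch of that offline approach, using only documented DISM switches; the paths and the package name are placeholders, and the real package names come from the /Get-ProvisionedAppxPackages listing:

```shell
:: Mount the install image from the extracted ISO media
dism /Mount-Image /ImageFile:C:\iso\sources\install.wim /Index:1 /MountDir:C:\mount

:: List the preinstalled Store apps, then remove the unwanted ones
dism /Image:C:\mount /Get-ProvisionedAppxPackages
dism /Image:C:\mount /Remove-ProvisionedAppxPackage /PackageName:SomeApp_1.0.0.0_neutral_~_8wekyb3d8bbwe

:: Commit the changes back into install.wim before rebuilding the ISO
dism /Unmount-Image /MountDir:C:\mount /Commit
```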
Why though? Microsoft is a business, Windows is a product. It works well as a product, sales are good, deployments are many. Why should they reconsider the current strategy?
A fresh take on the desktop, given the monstrous devices that are now available in terms of CPU, memory, etc., may completely redefine the footprint of personal computing.
I wish your idea took off, but the modern "developer" (even with insane amounts of funding) seems capable of only writing memory-hogging garbage.
They’re perfectly capable of writing good software but actively choose not to. 1Password is the perfect example - lots of money, good engineers, a team who had proven they could write beautifully implemented native applications. Then they switched to Electron so they could avoid double-handling, and now users are faced with a laggy, buggy, janky, resource-intensive application.
It used to be the price of the hardware. Partly it still is: people won't use your software if they can't afford machines that are able to run it. But hardware gets cheaper.
It's now more about power consumption: both as price of electricity and as battery life.
But usually the "waste of resources" is "less time spent developing"; that is, the users get some capability faster.
There are applications that are extremely vulnerable to energy saving crap. Anything real-time is simply going to need to consume more power. Waiting 15.625 milliseconds completely breaks some applications.
You will take timeBeginPeriod(1) and friends from my dead, cold hands.
Except your options in Windows 11 are "Buzzfeed style widget, remove all sensible taskbar configuration, add extra steps to any context menus, insert ads for OneDrive, Office, and Candy Crush at every opportunity!" Who would add them? :)
But seriously, the way Windows Server handles it is just great. Windows 11 could potentially have a more minimal install.
I'm on Windows 10. After running O&O ShutUp10, this OS has been as good as Windows 7. I can do all my software dev, gaming, video editing/graphic design, etc. on this operating system without issue. I don't think it's ever crashed on me. It genuinely makes me curious what kind of issues people are running into. After all these years being a power user of Windows, so far it's been a smooth ride. The last time I had trouble was with Windows XP and Windows ME before that. I skipped Vista and Windows 8 (except for working on netbooks at Intel back in the day).
I tried Windows 11 in October 2021 and it was an awful mess. Tried too hard to appeal to MacOS and Linux users when that ship has long sailed. Not sure what state it's in now, but I've got no plans to upgrade until I either buy a new device or until Windows 10 gets some serious security vulnerability that's not on Windows 11.
Compare that to Linux, 10 GB with a full GUI and minimal bloat after the fact, and it’s extremely frustrating.
Both my teenagers have 2x 1TB NVME drives installed to deal with the insane requirements of Steam, Epic and Xbox gamepass games.
We live in Australia with ~25 Mbps download FTTN, so installing and uninstalling is a huge pain and isn't practical.
I have a similar issue with Apple selling Macs with 256GB hard drives! Even with iCloud photo and docs offload, these Macs are close to useless as you'll constantly bump into storage issues.
Steam is another beast; you could consider a Steam cache server or similar, or alternatively teach your kids how to transfer unused games from primary storage to secondary, and drop a 6-10 TB drive in each machine.
I got some keys second-hand for Windows 10 Enterprise LTSC a few years ago, installed it on some ten year old hardware at the time, and I was honestly surprised how responsive Windows could be absent (to the best of my knowledge) the telemetry software, Cortana, etc., and how fast it could boot. It's almost like the true blue good Windows experience without all the nonsense is secretly reserved for only business customers and pirates.
https://youtu.be/Nh7po_P8qNU
FWIW, I use Alpine Linux on my pinephone in the form of postmarketOS (an Alpine derivative) with a full-fledged KDE desktop, running Firefox alongside. IOW., you can use it as daily driver just fine, just need to install the respective packages - which naturally makes it use more resources, but even then far from what Windows will use.
In my experience Alpine is a good fit for anything from an OCI container all the way up to a full-fledged desktop or server. But that's not all: you can have it running on your RPi or even your smartphone, as architectures like ARM are really first-class citizens, something relatively uncommon even with popular distros like Arch, which has only a fraction of its packages available for other architectures.
Alpine may come pretty bare-bones by default, but don't let that fool you: it's more than capable of at least anything a regular distro is if you know what to do with it. Even if you're a casual Linux user you can get it set up in no time by using the setup-* commands that it ships with, e.g. setup-desktop, which takes care of setting up a desktop environment without you having to worry about dbus, seatd, compositors or things like that. Also their repositories are filled with almost any package someone would need, and can always be coupled with complementary package managers like nix and flatpak in cases where apk isn't enough.
I love Alpine, and the above does justice to only a fraction of the reasons, especially when you consider things like running on a much leaner and more modern C runtime (musl instead of glibc), being systemd-free, and having a minimal, bare-bones, bloat-free philosophy, as it was originally intended for constrained embedded devices like routers. It's one of, if not the best distros available in my opinion, alongside NixOS and Gentoo, which I deeply respect as well. That being said, one has to factor in the compatibility drawbacks that features like being systemd-free and using musl imply, but I'm having trouble remembering cases where I've run into dead ends even on exotic setups like Alpine on aarch64 running natively on an M1 MacBook with a custom kernel like Asahi Linux, or on an sdm845 OnePlus 6T smartphone with pmOS.
Alpine with XFCE + dhcpcd-ui as a "WiFi seeking menu" would run circles around Windows 11 using 1/10 of the RAM, with Bluetooth support via Blueman and everything.
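For anyone wanting to reproduce that kind of setup on a fresh Alpine install, it's roughly the following; package and service names are from memory, so double-check them with apk search before relying on this:

```shell
# Interactive helper from alpine-conf; prompts for a desktop environment
setup-desktop

# Or roughly by hand: XFCE, a display manager, dhcpcd with its tray UI, Bluetooth
apk add xfce4 xfce4-terminal lightdm-gtk-greeter dhcpcd dhcpcd-ui blueman bluez
rc-update add lightdm
rc-update add dhcpcd
rc-update add bluetooth
```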
https://bellard.org/jslinux/
Obviously basic component activation is functional otherwise the shell wouldn't function (my biggest problem with WinRT/UAP: its insidious creep into the OS "internals" rather than just powering apps, widgets, add-ons, whatever on top of the base system), but I'm not sure how many apps you might pick at random from the app store will still work.
It's crazy how things that were considered basic many years ago run well within the performance we have in modern systems, yet the base system has requirements much higher than it used to.
Teams is a massive example of this: it's just text chat and video conferencing, stuff that was easily done 10 or more years ago, yet there are plenty of systems available today that run it like crap, let alone imagining running it on a ten-year-old system.
In a way, I think this reflects how life works in general. The way I see it, life expands until there's significant hindrance, or resources are exhausted. I don't mean it in a cynical way, like how Agent Smith does in the Matrix, regarding humanity, I just think that this is the nature of life in general.
https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
I don't know what went wrong but Windows 2000 was perfect, with 7 being almost as good.
This got me thinking - I wonder what's the largest commercial entity that I'm considering good in this regard.
[1] - https://news.ycombinator.com/item?id=34647699
0- Did they also remove telemetry and similar malware?
1- Is it usable for gaming? I mean, didn't they also remove anything important among the cruft? I have memories of shrunk XP "distros" back in the day that were hacked to the point they refused to run a lot of software.
FTA: "This OS install “is not serviceable,” notes NTDev. “.NET, drivers and security definition updates can still be installed from Windows Update,” so this isn’t an install which you can set and forget. Moreover, removing the Windows Component Store (WinSxS), which is responsible for a fair degree of Tiny11’s compactness, means that installing new features or languages isn’t possible. If you install and enjoy Tiny11, we guess you will have to look out for ISO updates as major feature revisions of Windows 11 arrive."
I don't know if I can consider this "bloat" removal.
That’d take well under 1MB.
I won't buy a license for a windows gaming box, because it's trash. I would pay for a non-trash version of Windows.
My vastly less powerful Manjaro arm Laptop with the same setup idles at 160MB.
Most of the RAM seems to be used by systemd btw.
memory was basically a non-issue unless i was trying to compile a large package, i don't recall the precise baseline it had after boot but it was probably around 12-25%
I can get a fully functionable desktop with GUI apps, generic hardware support (i.e. not locked down to my hardware), support for dynamic modules/drivers/libraries, audio, 3d-accelerated video, and more in a 300 MiB footprint (with only basic iso image compression) and runnable with 128 MiB of RAM.
Then comes in the last part of your statement: "do some very basic web browsing." The system above works just fine with a browser featuring < 201x tech, with great CSS, JS, HTML support. But if I need to build and bundle the latest Firefox, Chrome, or whatever without manually stripping out a ton of features (beyond what is available via distro package managers), that footprint triples or quadruples in size and the memory requirements skyrocket.
Changing the position of the taskbar like this is "OS Smell", and I can't ever see 11 becoming more than the next ME/Vista/8.
"Nobody uses that feature, what are you, a Linux user!?" /s
I remember using build tools to strip down Windows install CDs/DVDs back in the day to get the most performant and minimal installation possible.
I use Windows and Linux, for privacy concerns. If you want privacy, go for Mac (not that I'd do it). On mobile, still working out what are the best options.
For instance, Microsoft engineers have the ability to pull arbitrary files off Windows 11 machines, at least according to Microsoft press releases from a few years ago. Doing so required “managerial approval”, and was “only for debugging software faults”, but anyone vaguely familiar with the US CLOUD act knows that they’re legally required to provide the same access to law enforcement searches.
I seriously doubt it. Do you have a source for this?
Not that I'm complaining though, I don't need Windows anymore and ZorinOS serves me just fine while not making me feel like I'm in a tech prison.
I love slim systems so I'd really like to trim some fat.
It's got everything I need and more. Ubuntu was a horrible experience for me, from the UI polish to the general UX.
Here's a comment from me on Reddit, on why ZorinOS is a better choice than Ubuntu: https://www.reddit.com/r/zorinos/comments/10qtzi4/comment/j6...