Developing inside a virtual machine

(blog.disintegrator.dev)

182 points | by disintegrator 6 days ago

34 comments

  • cranium 2 days ago
    (Tangentially related) I had to run the desktop version of Excel to develop a quick VBA macro for a client. Problem: I've been developing on a Linux box for years and the idea of leaving my cozy dev environment for a plain Windows install gave me chills.

    After failing to install Windows in a VM (thanks, TPM), I found a way to run Windows apps nearly natively (https://github.com/winapps-org/winapps). It works by starting a Windows docker image and streaming the application frames over RDP. As the RDP client handles copy/paste and other niceties such as shared directories, it's way easier to integrate into my env than the other options.

    • rescbr 1 day ago
      This is great!
  • nneonneo 2 days ago
    You can use pbcopy/pbpaste in a Linux VM on Mac by making a shell script wrapper in the VM that calls “ssh mac-host pb{copy|paste}” - that is, basically ssh back from the guest to the host to use its clipboard. It’s seamless and fast since it’s basically a local network connection.

    My specific setup is that I use an authorized_keys entry on the host that restricts the guest to running a specific command, which limits what a compromised guest can do to the host. The command is set to a script that has a list of specific permitted actions. This is a good option if you’re looking for a bit of additional isolation between host and guest.
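
    The guest-side wrapper in that scheme can be tiny. A minimal sketch, assuming the host is reachable from the guest as "mac-host":

      #!/bin/sh
      # pbcopy stand-in inside the VM: forward stdin to the host's real pbcopy.
      exec ssh mac-host pbcopy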

    • divbzero 2 days ago
      > I use an authorized_keys entry on the host that restricts the guest to running a specific command, which limits what a compromised guest can do to the host. The command is set to a script that has a list of specific permitted actions.

      That’s a neat trick, thanks for mentioning this.

        command="command" ssh-ed25519 ...
      
      would be the authorized_keys entry and I’m guessing the script would read the SSH_ORIGINAL_COMMAND environment variable to determine which action was intended.
      • nneonneo 2 days ago
        Yes, that's exactly it.

        My authorized_keys line looks like this:

          command="${HOME}/bin/fromvm vmname",no-port-forwarding,no-x11-forwarding,no-agent-forwarding ssh-ed25519 ...
        
        I give each of my VMs a different name and key, which lets me identify them for the purpose of e.g. constructing ssh:// links for remote editing.

        The actual script uses $SSH_ORIGINAL_COMMAND, exactly as you've described, which means that while the guest thinks it's executing e.g. "pbcopy", the host's "fromvm" script actually receives "pbcopy" in $SSH_ORIGINAL_COMMAND and can apply the appropriate access control or restrictions.
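
        A minimal sketch of such a dispatcher (not my exact script; it assumes only the pasteboard commands are permitted):

          #!/bin/sh
          # fromvm: the command the guest asked for arrives in SSH_ORIGINAL_COMMAND.
          # Allow an explicit whitelist and deny everything else.
          case "$SSH_ORIGINAL_COMMAND" in
            pbcopy)  exec /usr/bin/pbcopy ;;
            pbpaste) exec /usr/bin/pbpaste ;;
            *) echo "fromvm: denied: $SSH_ORIGINAL_COMMAND" >&2; exit 1 ;;
          esac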

    • hamandcheese 2 days ago
      In the past, I set up something similar, except I would reverse forward my local ssh port to my remote servers (so that I could easily ssh back regardless of network topology). Ultimately I didn't keep it out of security concerns -- I had done nothing to limit the commands.

      On the topic of limiting the possible commands - for my use case I only needed pbcopy. Maybe think twice before letting an insecure VM or remote host read your clipboard contents with pbpaste.

      • nneonneo 2 days ago
        Yep - good point. Another option would be to put a confirmation in front of every paste attempt - for example, putting Touch ID in front of any pbpaste call from the guest (which you can enforce with the authorized_keys command). That should be low-friction enough that it isn't a major delay to your development process, while still being reasonably secure and providing the convenience of pasteboard access.
    • qazxcvbnm 1 day ago
      I do something similar, and one more tip is to remember to provide your pbcopy and pbpaste with `LANG=en_US.UTF-8`, or else non-ASCII will be garbled.
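
      For example, inside the guest-side wrapper, so the variable reaches the pbcopy process on the host:

        exec ssh mac-host 'LANG=en_US.UTF-8 pbcopy'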
    • fulafel 2 days ago
      Having that kind of ssh access from guest to host negates the security barrier benefits that using a dev VM might have.
      • nneonneo 2 days ago
        Please read the second part - I use a command restriction in authorized_keys so that the guest can only call certain commands.

        Yes, the guest has “unlimited” access to the pasteboard, which does introduce some risks. For example, the guest could set a malicious command line that you paste into the terminal - which is generally mitigated with bracketed paste in zsh, vim, etc. It definitely weakens the isolation to a certain extent, but I don’t think it completely negates the security barrier as you claim.

    • disintegrator 2 days ago
      Brilliant tip! I’m going to give it a shot tomorrow and update the post (with attribution).
    • bartvk 2 days ago
      iOS devices share a clipboard with macOS. It would be cool if that could be implemented on Linux.
      • sangnoir 2 days ago
        Linux has had this for more than 10 years with the KDE Connect phone app. KDE Connect has a bunch of other neat tricks, like letting your phone act as a touchpad for your computer.
  • apt-apt-apt-apt 2 days ago
    I accidentally typed 'npm install axioss' (extra s typo) this morning.

    When it successfully installed, it was terrifying to think that all my source code and private files could have been instantly shared with malicious actors. Not only that, there was the prospect of having to somehow wipe and verify all files were clean, reinstall the OS, and the possibility of some bootloader remnant still lurking.

    In this case, it seems that a security package had replaced a previous malicious package, making this instance benign. But it feels like I am only one typo away from an absolute catastrophe every time I install a package.

    VM seems like a good way to add some protection.

    • homebrewer 2 days ago
      You can wrap node and its package manager into something like bubblewrap, which will remove access to basically everything but the project root directory (including your home directory with its browser profiles and ssh keys).

      I use this script with an additional seccomp filter that also denies access to privileged syscalls, but I don't remember where the filter came from, so I won't post it here — you won't be able to audit it easily as it's basically a compiled binary.

      https://0x0.st/8zWK.sh

      Place the script anywhere and create symlinks named 'node'/'npm'/'yarn'/etc. pointing to it, putting them at the start of your $PATH. Run your commands as usual. Use a 'bash' symlink to see what it looks like inside the sandbox.
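
      The core of the wrapper is roughly this (a stripped-down sketch, not the linked script; it assumes a merged-/usr distro):

        #!/bin/sh
        # Sandbox sketch: expose the system read-only plus the current project
        # directory; everything else (including $HOME) stays hidden.
        cmd=$(basename "$0")   # node, npm, yarn, ... depending on the symlink
        # Since the symlink dir (e.g. ~/.local/bin) isn't visible inside the
        # sandbox, env resolves "$cmd" to the real binary, not back to this script.
        exec bwrap \
          --ro-bind /usr /usr \
          --ro-bind /etc /etc \
          --symlink usr/bin /bin \
          --symlink usr/lib /lib \
          --proc /proc \
          --dev /dev \
          --bind "$PWD" "$PWD" \
          --chdir "$PWD" \
          --unshare-all --share-net \
          --die-with-parent \
          /usr/bin/env "$cmd" "$@"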

      It's not as good as a VM, but much more convenient.

    • throw5959 2 days ago
      Bun package manager (compatible with NPM) doesn't execute any code during package installation.
      • diggan 2 days ago
        Except you "need" things like the postinstall lifecycle hook for some packages. So you add the specific package you want to trustedDependencies (as you'd need to do with node-sass, for example), and then we're back to it executing code after downloading, making compromises to upstream dangerous again.

        A lot better than npm, which lets any package run postinstall, for sure, but as always there are no silver bullets.

        Apparently there is also a default list of packages that are allowed to run scripts on download with Bun, FYI https://github.com/oven-sh/bun/blob/main/src/install/default...
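
        For reference, opting a package in is a package.json field, e.g. for the node-sass case above:

          {
            "trustedDependencies": ["node-sass"]
          }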

        • throw5959 2 days ago
          It allows you to separate these steps and only execute the unsafe ones in a container, without having to do everything in there.

          Thanks for mentioning the default list! Good point.

    • jeswin 2 days ago
      > VM seems like a good way to add some protection.

      Yeah, but someone should try to fix this anyway. It's not a nodejs-specific problem, but a fix is badly needed in node. Any of the 100s of authors whose packages I depend on might have made a typo, or just been careless. Software development requires a scary level of trust.

      I am also increasingly moving to VMs. I want tools (such as VSCode) to run on the main machine, but actual execution to happen in the VM. It's a bit painful and a drag on productivity, especially debugging.

      • skydhash 2 days ago
        The one trick I found that works well is to move everything into the VM. I usually opt for either emacs or vim, and if I need an IDE, I install i3. It just takes a moment to copy my dotfiles over.
        • quectophoton 2 days ago
          I'm doing something similar.

          My development environment for work is defined in a Dockerfile, and I have a small shell script whose only purpose is to call `docker run` with that image, mount a few volumes for caching, mount the CWD in "/workspace", and start a shell in there. Development is done with nvim.
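
          Roughly like this (the image and volume names here are made up for illustration):

            #!/bin/sh
            # Start a throwaway dev container with the current directory at /workspace.
            exec docker run --rm -it \
              --volume dev-cache:/home/dev/.cache \
              --volume "$PWD":/workspace \
              --workdir /workspace \
              work-dev-env:latest \
              bash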

          If I need Docker Compose, I run it from the host. For projects that I find unpleasant to work with in this way, I use GitHub Codespaces. I hadn't thought about using a GUI IDE from within a VM, so thanks to your comment (EDIT: and also the submitted article) I now have something new to try!

      • fpoling 2 days ago
        VSCode assumes that the remote side is trusted. So if VM is compromised, VSCode on the host can be compromised as well.

        For this reason I run VSCode inside the VM.

        • MajesticHobo2 2 days ago
          Can you point me to some documentation or proof of concept for this? Would definitely like to change my workflow if this is the case.
      • superb_dev 2 days ago
        Have you looked into something like Qubes OS?
  • Abishek_Muthian 2 days ago
    I do a switcheroo: I develop for iOS inside a macOS VM on a Linux host.

    After a five-year hiatus I started developing mobile apps again, and I was frustrated to learn that Apple no longer allows renewing the developer license on the web. I don't own a Mac, and even the Apple Developer app on iPhone didn't allow me to renew my license.

    After I signed into a macOS VM, I was able to renew my license through the Apple Developer app on iPhone, as the macOS version of the app requires a T2 chip.

    Now I have PTSD flashbacks of why I left mobile development in the first place.

    • theoreticalmal 2 days ago
      Can you share more info on your macOS VM setup? I've managed to set one up precisely once with Proxmox, and getting iCloud/iMessage to work required me to contact Apple support.
      • Abishek_Muthian 2 days ago
        I used Kholia's scripts[1] on QEMU with virsh. I didn't have much trouble other than some SELinux permission issues, which I resolved quickly.

        iCloud/iMessage have always been finicky with Hackintosh, but in my experience setting the correct serial number with the appropriate Mac model is the key to resolving those issues.

        With just a couple of years of the Hackintosh scene left before support for x86 Macs is dropped completely, a VM Hackintosh makes more sense to me than building a physical one.

        [1]https://github.com/kholia/OSX-KVM

        • LeFantome 2 days ago
          Even a VM will stop working soon as Intel support is removed from the OS and apps start demanding newer OS versions.

          Or is it possible to emulate Apple Silicon on a VM now?

          • Abishek_Muthian 1 day ago
            Yes, the macOS VM will also become useless in a few years, but a useless VM is better than a useless purpose-built Hackintosh computer.

            QEMU is capable of ARM emulation, but I don't think it's capable of emulating a SoC as sophisticated as Apple Silicon.

  • TowerTall 2 days ago
    I run my entire work computer inside a VM. I work from home and have a powerful desktop I use for my private stuff, hooked up to my 3 monitors. My work PC is a VMware VM running inside VMware Pro. I can minimize the work VM and keep it out of my sight, or I can selectively choose whether the VM should use 1, 2 or 3 monitors, so it is easy to switch back and forth between work and private without having any work-related data on my private desktop. The work VM is domain joined, O365 enterprise joined and locked down in ridiculous ways by corporate IT, but now I can run it from my powerful private PC without worrying that corp IT messes up my private computer.
    • vladvasiliu 2 days ago
      I was thinking of doing something similar, especially since work mostly insists on running Windows. Do you use conferencing software or such from the VM? If so, how's the performance?
      • TowerTall 2 days ago
        I started working from home around 12 years ago and have been using this setup in various incarnations. I experience zero issues related to running work from inside a VM. I work for an MS shop, so we are using Teams.

        The main issue with my setup is that VMware or Windows 11 (my host OS) can't use the GPU when rendering the UI of the VM (I'm not sure whether VMware or MS is to blame, or both), despite having a discrete GPU card installed.

        The rendering of the image must all be done by the CPU, which requires a lot of RAM.

        After switching to 3 x 4K monitors, the VM requires 62GB of RAM to be able to run on full screen on all monitors (63720 MB to be precise). I recall that I somehow managed to get it working while "only" using around 32GB, but it became unstable. 62GB is the sweet spot where everything runs smoothly. Haven't tried to adjust the settings in years. It was a pain to get working in the first place with 4k monitors, and since I have 256 GB installed, I just left the settings as they are.

        At the next motherboard upgrade I might revisit this, but I think VMware still can't take advantage of the host GPU in this regard, so I expect the RAM requirement to stay.

        Some computer problems can be solved by throwing hardware at it. This is one of those. Give VMware and Windows an obscene amount of RAM, and you can have Teams running smoothly and flawlessly inside a VM on a Windows Client Host in 4K.

        • intelVISA 2 days ago
          Weird. I sometimes pass through a dGPU for a work VM on Win10/11 @ 2K without much memory usage, though I don't use MS Teams, so it doesn't surprise me that it needs 60GB of RAM nowadays.

          I'm jealous of your system's 256GB; my memory controller looks to max out at 128GB, but it's kinda old now (DDR4).

          • TowerTall 2 days ago
            As I understand it, when connecting to the VM through the VMware console viewer, the desktop image of the remote computer is rendered within a VMware process, which is CPU-bound, so only the CPU can handle this task.

            Spanning multiple 4K monitors demands significant RAM to handle the large aggregate framebuffer size and the associated overhead for rendering and display synchronization.

      • TheTxT 2 days ago
        I did the same thing for many months running teams. It was about what you expect from teams. I didn’t have any other significant issues, but eventually stopped using this setup due to increasing security requirements of the VPN software.
  • moritonal 3 days ago
    The Dev Container ecosystem for VS Code really is quite impressive at the moment. All your dev dependencies, wrapped up in a Docker image per repo.
    • psyclobe 2 days ago
      Heh, if you can work around ALL its weird quirks, especially on Windows.
      • elcritch 2 days ago
        Having supported team members when we were all running this setup, there was constant fiddling with Docker. Containers would freeze and often required restarting Docker directly, especially with WSL on Windows.

        It’s slick when it works.

        • moritonal 2 days ago
          I noticed similar things, and found most of these problems went away with more memory and less usage of the "{}.features[]" in the `devcontainer.json` file.
      • urronglol 2 days ago
        What quirks have you come across? The only annoyance for me was the SSH agent: having to uninstall the Windows Store version.
    • invalidname 2 days ago
      100%. I've been using that and also the DevContainer support in IntelliJ/IDEA which is good but has some limitations (e.g. I can connect IntelliJ but not CLion at the same time).
  • mongol 3 days ago
    I do something similar but using WSL on Windows. But something I really, really hate is dealing with the special certificate handling required to pass the corporate Zscaler proxy. I think it works somewhat transparently on the Windows host, but repeating the setup in every VM is such a pain.
    • UltraSane 2 days ago
      I have administered Zscaler and I bet the issue is that Zscaler is doing TLS MITM and every Windows machine joined to the domain is configured to trust the Zscaler wildcard cert used for every site. This usually just works for anything joined to the domain, but the cert has to be manually trusted for anything else. And yes, it is amazingly annoying. I try to write a script or bake the cert into an OS image.
    • deergomoo 3 days ago
      Heh, my employer is rolling out Zscaler this year. The limited trial a few months ago was hell for folks using WSL primarily, with Docker images adding an additional layer of pain.

      The people in the trial got very little done until it was decided to pause it, and I do not have high hopes for when it’s tried again. It strikes me as basically running malware in the name of security.

      • UltraSane 2 days ago
        I worked at a government agency that used Zscaler to perform TLS MITM inspection. You have to create a tunnel to a Zscaler datacenter and send all your traffic to them encrypted with a certificate they provide so they can decrypt it. Then they encrypt it again and send it on its way. It can detect things that otherwise could not be detected, but you are putting a LOT of trust in Zscaler's security, because anyone who hacks them can see EVERYTHING you are doing. And it is a HUGE waste of processing power and joules. You can create exceptions for URLs and source IPs.

        I much prefer filtering on the endpoint before TLS encryption.

        • bitwize 2 days ago
          You'd think last year's Clownstrike incident would put the lie to the efficacy of the fucking-for-virginity approach to endpoint security favored by organizations but no.

          At the enterprise level, security isn't really about security, it's about having an audit trail so bad actors can be caught after the fact.

          • UltraSane 1 day ago
            It is like hiring bodyguards. Bodyguards could kill the person they are protecting at any time, BUT they have an economic and legal incentive not to, so you bet that the odds of being killed by your bodyguards are far lower than by some random stalker.

            Likewise, giving CrowdStrike root access to everything is a bet that you will, on the whole, be more secure than if you didn't, and for most companies I believe this is true. But if you are Google or AWS, you are going to be able to do better than CrowdStrike.

          • daghamm 2 days ago
            You would be surprised how much of corporate cybersecurity is done like this. It has not in any way improved since CrowdStrike; on the contrary, EDR shenanigans have probably grown 100% since last year.

            These security companies must have really good salesmen. Or maybe IT departments are always run by clueless fools, who knows?

            • screcth 2 days ago
              The security team cares about minimizing risks to the company and to their own careers.

              Deviating from what everybody else is doing makes it so that the burden of proving that your policies are sane is on you and if anything bad happens your head is the first to roll.

              You use CrowdStrike and the company lost millions of dollars due to the outage? That's not your problem, you applied industry standard practices.

              You don't use CrowdStrike and the company got hacked? You will have to explain to the executives and the board why you didn't apply industry standard practices and you will be fired.

            • vladvasiliu 2 days ago
              > Or maybe IT departments are always run by clueless fools, who knows?

              I think IT has its fair share of clueless fools, but what I've noticed is that when the "security department" is separate, people there tend to have no idea what they're talking about and rely on checklists. Plus, "everybody uses X, that means we're missing out".

              • MaKey 2 days ago
                Corporate IT security seems to be mainly about checklists and compliance, not about actual security.
                • mrguyorama 2 days ago
                  There's no reason to do anything else. Nobody has gone to jail as of yet for not securing their company, and even "security" companies that get utterly popped still have plentiful business a year later.

                  There is no legal incentive to do good security. There is no market incentive to do good security. Why is it so surprising to people that we have abysmal security?

                  • vladvasiliu 1 day ago
                    In my case, it's surprising because companies waste a ton of money buying snake oil and aggravating their users for next to no benefit. You'd expect companies that "only care about their bottom line" to optimize this away, yet they don't.
          • gruez 2 days ago
            >the fucking-for-virginity approach to endpoint security

            ???

            • bitwize 2 days ago
              Compelling users to have software indistinguishable in its operation from malware running on their machines for security purposes is, as the expression goes, like fucking for virginity.
        • rawgabbit 2 days ago
          I knew Zscaler did MITM. But I thought it only inspected hashes or summaries to detect malicious content. I didn’t know it would encrypt again.
          • klooney 2 days ago
            They even do per-service stuff: their big AI feature is that it will detect people pasting social security numbers or other PII into ChatGPT and block it.
          • gruez 2 days ago
            >I didn’t know it would encrypt again.

            "encrypt it again" in this case means establishing a new TLS connection to the original host and forwarding the decrypted contents in this new connection. This is obviously required if the original host only had a https endpoint, and (more importantly) so the traffic isn't exposed on the wider internet.

      • sieabahlpark 2 days ago
        [dead]
    • chrisweekly 3 days ago
      Given how much you hate it, any chance you documented how you did it?
      • mongol 2 days ago
        No, not really. The employer has some documentation; it is not complete, but it's a starting point when issues pop up. For example, a JDK needs to have special certificates installed if Java tries to talk SSL. And when you juggle different JDKs for different Java versions, it becomes a nightmare. The best you can do is try not to touch anything once it eventually works, but eventually something unforeseen breaks it anyway.
      • Atotalnoob 2 days ago
        All you really need to do is install the Zscaler cert in the appropriate trust store.

        It's really a one-step process in your Dockerfile or other location.
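
        For a Debian/Ubuntu-based image, that step would look something like this (with the cert exported from a machine that already trusts it):

          # Assumes the ca-certificates package is present in the base image.
          COPY zscaler-root.crt /usr/local/share/ca-certificates/zscaler-root.crt
          RUN update-ca-certificates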

  • crabbone 2 days ago
    But why tho?

    The only kind of plausible explanation the author gives is that it's "more secure" because the imaginary attacker will have to take an extra step to get the password from the VM instead of the host OS? -- This seems like such an inconsequential / worthless benefit to jump through the hoops of running things in a VM...

    Like... I wasn't sold on this approach from the get go, and this pitch makes it sound like I was right all along?

    Other non-starter "bonuses" include not installing developer tools on your laptop that you have for... drum roll... development. Why? Its sole purpose is to be used for development, why not install development tools on it? It just doesn't make any sense...

  • disintegrator 2 days ago
    Author here. Thank you for all the tips. I especially like the idea of using ssh from guest to host to enable pbcopy/pbpaste and open.

    Now I know what all the WSL users experience seamlessly with their setups. Glad I have something that comes close.

    • StreakyCobra 2 days ago
      Thanks for the post, an interesting read!

      Side note: I checked out your other blog post, and it resonates with my own first post, which I wrote just two days ago: https://fabiendubosson.com/blog/overcoming-perfectionism/. You’re definitely not alone in battling anxiety, perfectionism, and procrastination when it comes to blogging. Keep writing! :)

      • disintegrator 2 days ago
        Thanks! Really appreciate your comment :)
  • mmwelt 2 days ago
    This all looks fine for developing server apps that don't need a GUI, particularly as long as 3D-accelerated graphics aren't needed. You don't even need to be using/developing a 3D game or application; just using a modern GUI without too much lag now seems to require 3D acceleration.
    • weikju 2 days ago
      3D acceleration is pretty well supported in VMware and sorta works well in UTM
      • 01HNNWZ0MV43FF 2 days ago
        Complete pain in the butt for anything else like qemu
        • weikju 2 days ago
          UTM's backend is qemu, isn't it? so one could check what they're doing
  • zokier 2 days ago
    I use similar setup on Windows (with vmware/virtualbox/hyper-v at different times), which kinda highlights one additional upside: it doesn't matter that much what the host system is, you can do your work all the same regardless if it is macos/windows (or even linux). As long as it can run the vm and vscode, you are good to go. Although admittedly Apple going with aarch64 throws a small wrench in the equation.

    It is especially nice in corporate environment, where the host system is generally managed by IT and the devices are largely impersonal (standardized configuration, standardized software). You can carve out a corner to make your own and work there. <insert rant on ineffective corporate IT>

  • gbraad 2 days ago
    Consider using Tailscale Drive to expose certain folders using WebDAV. I have been doing this for dev containers and VMs since it was introduced, and it has since replaced WinSCP and other ways of sharing files.
    • disintegrator 2 days ago
      Just trying this out and it seems amazing so far. Thanks for the tip!
  • lizknope 2 days ago
    Are most companies this flexible in allowing developers to install whatever they want?

    > My physical machine is a 2023 MacBook Pro with M2 Pro CPU

    > I’m using VMWare Fusion Pro

    > Quite often I’ve found developers frowning up Ubuntu and preaching for folks to use NixOS, Arch, Debian or other distros. The reality for me was Ubuntu was the fastest way to get set up and now

    I'm in integrated circuit / semiconductor design. At every big company over the last 30 years we are given a computer and we can change the desktop environment but we aren't installing our own operating system.

    The people I know in software have a common OS, compiler, and build environment. They aren't dictating what text editor you use but you aren't working on projects individually but together.

    So if everyone at the author's company is doing their own thing do they have problems integrating all the code together? "Oh you used version 2.3.4 but I used version 2.4.7 which fixed this issue, what are we using to ship with?" Or is this not a problem?

    • bluehatbrit 2 days ago
      I work mostly in web / server side development, it's not really a problem I've had for a number of years now. Some of my colleagues use various linux distros, others macos. No one is using Windows that I know of.

      Each project we have requires a specific toolchain version (Python, Elixir, ...) and specific versions of things like Postgres. All dependencies are listed in some kind of dependency definition file (pyproject.toml, package.json, mix.exs). If you bump a package, it's done in the definition file as part of your changes and goes through CI for packaging and releasing. The rest of the team will get the new package version as soon as they pull your changes and run `just deps` or whatever. CI is the ultimate determining factor of whether your code actually "works".

      We also package and deploy with containers, but this isn't the real determining factor for any of the above.

      • lizknope 2 days ago
        That sounds great. I probably haven't asked my software friends about their setups in 5-10 years.
    • IsTom 2 days ago
      There's typically a list of dependencies for a specific project as part of it, (hopefully) handled automatically by some part of the build pipeline.
      • whatevaa 2 days ago
        Build pipelines don't have much to do with actual development. What, do you edit stuff but never build it locally?
        • IsTom 1 day ago
          I should have written "build process" instead I suppose.
  • nkko 1 day ago
    An alternative approach we're working on is Daytona (https://github.com/daytonaio/daytona): it orchestrates dev environments on your infrastructure or even on your local machine. Oh, and it is Apache 2.0. The interesting bit is how it handles IDE integration and the various providers for running your dev envs.

    Disclosure: I work on this project

  • sushidev 2 days ago
    Started using code-server on a remote server. Pretty good. Going to switch to working like that. It's a bit like remote VS Code, except that the VS Code UI is also remote and served via a web browser. Coming from IntelliJ, I was surprised that the user experience in terms of responsiveness is actually better with the remote setup. IntelliJ these days just lags on everything.
  • tkiolp4 3 days ago
    Why does the author need a “remote ssh” plugin in their VSCode? I usually develop inside a VM as well, with my IDE running in the host… but what I do is to mount a shared directory for the code between the host and the VM. Works pretty fast.

    Don’t understand the need for Tailscale either. When I’m running services or dbs inside the VM, I can easily access them if needed from the host (either by IP or by the hostname I gave to the VM on start up)

    • kevingadd 3 days ago
      SSH remote in VS code has way better latency and performance characteristics than mounting a shared directory. Stuff like disk change monitoring also works a lot better.

      The one mixed/negative thing is that language servers will run inside the VM instead of the host where the editor is "running", which can defy your expectations. I find that a plus since language servers love to tie up multiple cores and eat up memory and having that happen inside the constrained VM environment stops my host system from getting bogged down.

      I used to edit in a shared mount before and moving to the vscode ssh remote model was a noticeable improvement. It's just faster.

      • kijin 2 days ago
        SSH remote absolutely rocks. It's the #1 reason I chose VS code instead of some other editor with an SFTP plugin.

        I'm working on a Windows host with a bunch of Linux VMs. Although I can share directories between the host and guests, I prefer to rely on SSH remote because I want to work in the VM's filesystem and its environment. For example, I don't want to care which version of python and what kind of libraries are installed on the host. The VM is supposed to be a container for all that stuff, and different projects have different requirements.

      • 1718627440 2 days ago
        What would prevent you from mounting your directory with ssh?
    • disintegrator 2 days ago
      As others have mentioned, I’ve not had great performance with shared folders and the SSH extension in VS Code is so damn good. Over time, you forget it’s even running because you open recent projects and it remembers which were local (on host) and which are on the guest and SSH’s in automatically.

      I could probably revise my use of Tailscale. My vague recollection is that I had networking issues when my laptop woke up and Tailscale didn’t have the same issues. Probably a debugging skill issue on my part.

    • askonomm 2 days ago
      So how does your IDE pick up on the tooling inside the VM? E.g if you build Python projects, how does it pick up the Python executable, .venv, etc? Or if PHP, then the PHP runtime, or if C then its stuff ... etc? If you install these on your host machine to make your IDE work well then I'm afraid that defeats the point of having a VM.
    • tomjen3 2 days ago
      So, VS Code has remote development, which any nerd would instantly assume means "oh, it just copies files transparently", but it doesn't.

      It actually runs the code, including plugins you download from the internet. All your development tools, compilers, etc. are on the remote. And then you just have a blazing fast editor on the front end. It's really unique. You can use Tramp mode in Emacs, but it is extremely slow to copy back and forth. SSH into a remote server? Let's just say 200 milliseconds of lag when you're trying to input characters is not a good experience.

      The highest praise I can give Visual Studio Code is that remote development felt so much like local development that I wondered why it suddenly froze. Well, it turns out it's good, but it still can't deal with a network that's down. That was obvious in retrospect, however I hadn't thought of it at the time because I had completely forgotten I was doing remote development.

  • raihansaputra 2 days ago
    Are you installing the project dependencies on the VM directly or in a docker container? I'm curious how well docker on top of the Ubuntu vm works. Orbstack is great for personal use, but some companies don't want to pay for it, and this might be an alternative to have a better docker experience on macOS.
    • arkh 2 days ago
      > I'm curious how well docker on top of the Ubuntu vm works.

      Works well enough. Better than Docker on bare-metal macOS if you have a lot of file access in a volume.

      That's one of the things that surprised me when I started using VMs to develop. The first time was with a Postgres-backed app: I expected to lose performance when moving everything into a VM, but I got the exact opposite result at the time. Postgres liked the Linux filesystem much more than the Windows one, enough to more than offset the VM tax.

    • disintegrator 2 days ago
      I work directly in my guest OS and clone my projects and run them in there. There are some projects that are driven through docker-compose, and that works nicely. The one caveat is that I had to `apt install binfmt-support qemu-user-static` so that Docker can run x86_64 images on my arm64 VM.
    • UltraSane 2 days ago
      Docker in a VM works fine because Docker isn't a VM; it just uses Linux features like pivot_root, namespaces, and cgroups to isolate programs. At least on x86 CPUs you can even do nested virtualization if the CPU supports it.
  • Too 2 days ago
    Any benefit of using a VM over docker container here? Since you seem to use the terminal only, without any graphical applications. Containers should be more lightweight and dockerfiles allow quick and reproducible changes to the guest OS.
    • kasey_junk 2 days ago
      Docker uses virtualization on Macs
  • mrbluecoat 3 days ago
    If you like NixOs and virtual development environments, perhaps try https://www.jetify.com/devbox or https://flox.dev/
  • tonymet 2 days ago
    Windows 11 and WSL manage this well. For those developing Linux apps & containers using VS Code, you'll find the Windows 11 experience to be very good. You can code against WSL, which offers the more popular distros, or use Hyper-V to run your own custom VMs.
    • makeitdouble 2 days ago
      In general, yes.

      One weird quirk: networking can be peculiar. Windows creates a magic bridging between the host and WSL, and as anything magic, it can break for specific use cases.

      VPN is one [0]: my WSL instances lose outgoing networking when connecting to our company VPN. There are workarounds but none are trivial.

      [0] https://superuser.com/questions/1715764/wsl2-has-no-connecti...

      • tonymet 2 days ago
        Good to know. My vpn works but I believe it’s wireguard-based. I wonder if yours is TUN/TAP or another driver
        • makeitdouble 2 days ago
          It seems to be TAP (layer 2 tunneling)

          Thinking about it, as the whole machine is under MDM (I only have the VPN on my dedicated work machine), there might be additional quirks that mess with the networking as well. Even bridging the Wi-fi to it was kind of a PITA.

      • k8sToGo 2 days ago
        This has been fixed for many months now as you can switch between different networking types for WSL.

        https://learn.microsoft.com/en-us/windows/wsl/networking#mir...
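
        For reference, mirrored mode is a host-side setting in %UserProfile%\.wslconfig (Windows 11 22H2 or later):

          [wsl2]
          networkingMode=mirrored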

  • arkh 2 days ago
    > I’ve found developers frowning up Ubuntu and preaching for folks to use NixOS, Arch, Debian or other distros.

    My setup is mostly one VM per project group / online identity, most of them using Ubuntu. The problem is that when I want to work on an old project to check how it likes new technology, I tend to stumble into the "you should have kept the OS up to date" problem. Ubuntu does not make it easy to upgrade if you miss more than a year of updates.

    And even if you keep up to date, they tend to break things often (I loved the X11 to Wayland switch when working with screen capture libraries), so new VMs are using Debian.

    • Asooka 2 days ago
      Don't you get the same problem with upgrades with Debian? As for Wayland, seeing the progress over the last 17 years, I estimate it will be ready for regular use sometime during the 41st millennium.
      • thequux 2 days ago
        With Debian, if you develop on stable, you only need to do the upgrade song and dance every year and a half or so, and upgrades rarely break anything. If you develop on unstable, you can use the snapshot archive to either upgrade 6 months at a time or move back to the next stable and then walk through stable releases.
      • arkh 2 days ago
        > Don't you get the same problem with upgrades with Debian?

        I don't think that with Debian you have to do manual configuration to upgrade your distribution because it is a couple of years old and the current scripts don't support it (going from 22.10 to 24.04 is a fun game).

  • TacticalCoder 2 days ago
    Curiosity: is anyone here developing inside a VM, with GPU passthrough, and with the monitor directly connected to the GPU used by the VM? (as in: showing the UI of the VM, without any need for RDP or the like)

    Such a setup works (I'd know, for I have one at home doing just that, but it's not my main PC), but what's it like to work like that?

    The GPU hooked to the hypervisor can either be on another monitor or on another input (in the latter case you'd "go" to the hypervisor by changing the monitor's input).

    • thehamkercat 2 days ago
      I have a 7800X3D with 64GB of RAM; it's overkill for programming.

      So I've installed Proxmox on it to utilize its resources for other stuff as well.

      For my personal use, I created an Arch VM with GPU and USB PCIe passthrough, with all 3 monitors directly connected to that GPU.

      It's so seamless and fast that I don't even feel I'm working inside a virtual machine.

      I have other headless VMs hosted to do other things (OPNsense etc.)

    • TheTxT 2 days ago
      I used such a setup for gaming, using a Windows and a Linux (later Proxmox) host. With everything passed through you basically don't realize that you're sitting in front of a VM; it's great.
  • pjmlp 2 days ago
    At work, development with VMs has been a given since the early days of Amazon EC2 in 2010.

    Likewise, when Windows 7 came out, I stopped bothering with dual-booting hassles and used VMware Workstation instead for whatever Linux I needed.

    The exception was a netbook from the glory days of Asus Linux netbooks, a price category nowadays replaced by tablets.

  • dsfsaff 3 days ago
    I have used a VirtualBox VM with an Ubuntu guest for years and it has worked great. It's as close to the VMs in prod as you can get.
    • malux85 3 days ago
      I used to do that, but now that all of our microservices are dockerized every microservice has its own docker container

      Vscode supports remote containers, so everyone in the org just develops INSIDE a replica of the prod container

      All containers run remotely on enormous machines with 800+ GB of RAM and 8+ GPUs

      It’s trivial to share environments now because you just open the project and the dev container starts up and installs all the deps, devcontainer.json is just a few kb so just check it into git

      Engineering, DevOps, Data science all use this setup, push around your devcontainer.json and everyone gets the same GPU accelerated dev environment with near unlimited RAM and hundreds of CPU cores, none of this hardware is local so you can code on the balcony/beach on your MacBook Air, light and easy to travel with.
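
      For anyone curious, a minimal devcontainer.json in this spirit might look like the following (the name and extension list are illustrative, not our actual config):

        {
          "name": "backend-dev",
          "build": { "dockerfile": "Dockerfile" },
          "customizations": {
            "vscode": {
              "extensions": ["ms-python.python"]
            }
          }
        }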

      We put VMs in the same country as our staff so latency has never been an issue

      This is the dev setup I’ve wanted for ages, and it’s a joy to use

      • bluehatbrit 2 days ago
        Do you have any more detail on how you're handling this on a shared host? My understanding is that the base remote containers + remote ssh extensions would require the code to exist directly on the remote host, and then the container to be created afterwards (and bind to the host directory etc). Is this what you're doing?
        • malux85 1 day ago
          Yes that's right, the process is:

          - clone the repo with git

          - Open Cursor (or vscode) locally, click "open remote window" little blue button at the bottom left of the screen

          - Navigate to the folder you cloned and press open

          - IDE will recognise that there's a .devcontainer inside the folder and pop a dialog saying "Re-open in container", click that. If this dialog doesn't show press "Shift+Ctrl+P" and type "Rebuild" and a menu option "Dev containers: Rebuild and re-open in container" will show up, select that.

          - The docker container will be built and you'll be dropped inside it, with the appropriate folder mounts happening automatically. Now when you open a terminal inside the IDE it's a terminal inside the container.

          • bluehatbrit 1 day ago
            Great thanks for the info, this is along the lines I was thinking. I'm guessing that means you also have dedicated user accounts on the shared host for each developer? I suppose that would be pretty easy to manage with some ansible.

            I've always liked the devcontainer approach, and in particular github codespace. But I've wanted to run it on hardware we can buy and manage. This approach sounds like it gets you 95% of the process, just missing a bit of the convenience around env per branch like codespaces can do. But that's hardly a problem really.

      • eikenberry 3 days ago
        What do devs who don't use VSCode do to work in this environment?
        • MawKKe 2 days ago
          VScode devcontainer can build from existing Dockerfile. You can develop the project image as usual, and then reference the Dockerfile from devcontainer.json. This means you can build and run from the command line via `docker` command if needed. The VScode extension just makes this slightly easier.

          Not sure how GP's company does it, but that is how I would configure it.

          Caveat: the default devcontainer initialization workflow does _not_ create the Dockerfile, only the .json.

          At $work we don't use devcontainer.json, but we can launch the development environment image such that you can SSH into it as if it was a regular VM.

        • malux85 2 days ago
          We only have one dev who hasn't made the switch; he works in vim inside the Docker container.

          I don't actually mind what development environments our devs use, as long as your productivity is up, you get the job done, and you are happy. You can use a magnetised needle for all I care; whatever makes you the best version of you.

        • anarwhal 2 days ago
          One option as mentioned in another comment is to use an editor inside the VM itself on the CLI. I've also tried mounting SSHFS directly which can work, though some inotify-type things don't always work.
  • jareklupinski 3 days ago
    is there a way to forward usb/serial ports from my local machine to the dev container?

    maintaining consistent firmware development environments using containers is a great idea, and current solutions involving proxying the compiled binary work well for flashing quickly, but switching back and forth between UART and Serial Debug is always more convenient when the IDE can handle it all

    • watermelon0 2 days ago
      You mean forwarding to VSCode dev container?

      If you are using Docker Engine directly on Linux, you can forward a device to a container via docker-compose `devices` setting.

      If you are using Docker Desktop (or similar), there is no native way. However, there are ways to share USB devices over the network (USB/IP is an open source implementation of this), in which case you set up a server on the host device (can be macOS/Windows/Linux) and then run client software inside the container.
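
      The Docker Engine route is just a few lines in the compose file (the device path is an example):

        services:
          firmware:
            build: .
            devices:
              - "/dev/ttyUSB0:/dev/ttyUSB0"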

    • nottorp 3 days ago
      I've done Linux/ARM Yocto development from VMs. The best solution to pass USB/serial was still VMware the last time I had to.
    • nneonneo 2 days ago
      VMWare has an excellent implementation of this which can selectively connect devices to the guest, and it properly remembers the action for each device you connect to your computer.
    • deivid 3 days ago
      Not sure what your host OS is, if Linux, QEMU can pass usb devices to the guest.

      Otherwise you could pipe serial over TCP
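
      For the QEMU route, a sketch of the relevant flags (the vendor/product IDs are placeholders; find yours with lsusb):

        qemu-system-x86_64 -m 4G -enable-kvm \
          -drive file=guest.img,format=qcow2 \
          -device qemu-xhci \
          -device usb-host,vendorid=0x0403,productid=0x6001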

  • rcarmo 2 days ago
    I use Proxmox on my LAN and RDP connections to Linux and Windows desktops of various kinds. It’s great.
  • secondcoming 2 days ago
    If you're using a personal device for developing your employer's code always use a VM. You can just nuke the whole thing when your employment ends and not have to worry about any of their IP remaining on your machine.
  • amelius 3 days ago
    Beats developing inside a docker container.
    • cachvico 2 days ago
      Does it though? I've developed in remote VMs before and the advantages are clear, but having a fully containerized development environment is really nice too because you can tear the whole thing down and rebuild at the drop of a hat. You can achieve that with a VM and scripts, but a Dockerfile is very lightweight and standard.

      Edit: Unless you literally mean "editing code in a container with vi". In which case yes I'd go for the VM too!

      • Multicomp 2 days ago
        I am currently doing development on a VM with remote SSH, but I use the terminal on said VM to run a docker container when I need to actually run and build the thing, so it is possible to get both remote SSH tooling and containerization benefits, without needing to build a docker container and SSH into it from vs code, which might be what GP was saying.
      • amelius 2 days ago
        Well, if what you're developing is an editor, you'll be editing inside the docker container either way ...
  • mootoday 2 days ago
    Pretty sure you'd have a more lightweight experience with https://www.jetify.com/devbox.

    Happy to set it up and demo if you can share (or DM) a repo URL.

    • rcarmo 2 days ago
      That sets up a VM with Docker as well.
      • mootoday 16 hours ago
        No
        • rcarmo 12 hours ago
          You can't run Docker on a Mac without a VM.
  • pshirshov 2 days ago
    I use Nix for exactly the same purpose (dependency management for code generators), that's much more efficient and easier to maintain than VMs.
  • firesteelrain 2 days ago
    We have been developing code inside VMs for over 15 years at my company, with various flavors of VMware on the backend and lately moving more to Azure. I assumed this was normal.

    We reach our VMs via VDI.

    • regularfry 2 days ago
      I have heard VDI described variously as "an abomination", "unusable except when you absolutely have no other option", and "don't". That might be down to the implementations in play at the time, though.

      I most often see this sort of thing where corporate IT can't stomach devs getting root on their own machines. It's a very specific sort of corporate dysfunction.

      • firesteelrain 2 days ago
        lol yep no root here

        But, I am in an airgapped environment that is tightly regulated if you get it.

  • urronglol 2 days ago
    Devcontainers running as non-root. Trivial to set up. No need to fanny around with a VM.
    • disintegrator 2 days ago
      I've tried devcontainers in the past and the performance compared to my current setup was pretty bad. This was a few years ago when it was known that filesystem-heavy workloads on Docker for Mac were sub-optimal. I remember having to define several bind mounts which improved the overall performance. I do intend to revisit this solution next time I need to set up a dev environment but rest assured there was nothing substantially more complex about a VM versus devcontainers.
      • bluehatbrit 2 days ago
        I use it at the moment and don't really find any noticable difference between running directly on my host, and in a dev container. If I were to measure the performance I'm sure there would be something, but it's not noticable in my development cycle.

        They also seem to be pushing it beyond vscode and into something which is editor agnostic. It's not quite there yet on that front, but I'm excited for it as I've been dabbling with other editors recently which don't support devcontainers directly and it always pulls me back to vscode.

        It's on a journey for sure, but I've had no performance issues when using it straight out of the box over the past year.

      • urronglol 2 days ago
        So why are you commenting on it now as if it hasn’t changed or evolved since then? Give it another try now. It is really good.
  • hrtk 2 days ago
    Have you tried Lima?
  • tonymet 2 days ago
    For the rest of you, don't be fooled by Darwin, it's a dusty BSD in Linux clothing