OBS merges WebRTC support

(github.com)

450 points | by Sean-Der 318 days ago

12 comments

  • Sean-Der 318 days ago
    WebRTC support has been merged into OBS. This is going to bring some really exciting new things to the space!

    * Serverless Streaming - WebRTC is P2P, so you can stream video right into your browser. You don’t have to stand up a server anymore to stream for a small audience.

    * Sub-Second Latency - Create content and interact with viewers instantly. There is something magical about having a real conversation with your viewers.

    * Multitrack Input - Upload your transcodes instead of generating them server-side. Give viewers multiple video tracks to see the action from all sides.

    * Mobility - WebRTC lets you switch networks at any time. Go from WiFi -> Mobile with zero interruptions.

    Try it out today with Broadcast Box, a reference server implementation: https://github.com/glimesh/broadcast-box. The PR that added WebRTC to OBS: https://github.com/obsproject/obs-studio/pull/7926
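
    For a feel of the viewer side, here is a rough sketch of a WHEP-style browser player in TypeScript (the endpoint URL and stream key are placeholders; a production client would also wait for ICE gathering or use trickle ICE):

      // Minimal WHEP-style viewer sketch: POST an SDP offer over HTTP, get an
      // SDP answer back; after that, media flows over ICE/DTLS/SRTP, not HTTP.
      async function watch(videoEl: HTMLVideoElement): Promise<void> {
        const pc = new RTCPeerConnection();
        pc.addTransceiver("video", { direction: "recvonly" });
        pc.addTransceiver("audio", { direction: "recvonly" });
        pc.ontrack = (ev) => { videoEl.srcObject = ev.streams[0]; };

        await pc.setLocalDescription(await pc.createOffer());
        const resp = await fetch("https://example.com/api/whep/myStreamKey", {
          method: "POST",
          headers: { "Content-Type": "application/sdp" },
          body: pc.localDescription!.sdp, // the offer we just created
        });
        await pc.setRemoteDescription({ type: "answer", sdp: await resp.text() });
      }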

    -----

    So much work went into making this happen. This was a colossal undertaking that different people have been working on for over 6 months.

    Sergio Murillo - Created WHIP. The only reason this could be added to OBS.

    Luke Strickland - Created Broadcast Box. The reference server we developed against

    Paul-Louis Ageneau - Creator of libdatachannel. The library used to add WebRTC support to OBS.

    Colin Edwards - Started the project to add WebRTC into OBS

    John Bradley, tt2468, pkv - Developers who worked on the code itself

    RytoEx, tytan652 - Lots of feedback and reviews

    -----

    Have fun using it! If you have any questions/feedback/improvement ideas I would love to hear.

    • ehsankia 317 days ago
      Woah, so if I understand this correctly, I can just put whatever stream key I want in OBS with https://b.siobud.com/api/whip as my server, then my friend puts the same key on the Broadcast Box hosted website and that's it? That's amazing. I do a lot of 1:1 streaming, sometimes with Discord, sometimes with Twitch, but this is much better.
      • Sean-Der 317 days ago
        Yea exactly!

        https://b.siobud.com might not always be available, so it's best to host it yourself for anything important :)

        Excited for you to use it. Send any ideas my way for improvement.

    • gloosx 317 days ago
      As a dev who works with WebRTC a lot, I'll debunk some of these claims:

      * Serverless Streaming - WebRTC is P2P, so you can stream video right into your browser. You don’t have to stand up a server anymore to stream for a small audience.

      > Only possible in an ideal world where NAT doesn't exist. In reality, you need to traverse NAT, and that is NOT a serverless process.

      * Sub-Second Latency - Create content and interact with viewers instantly. There is something magical about having a real conversation with your viewers.

      > Only possible in an ideal world, where machines are very close to each other. In reality, you need expensive world-wide TURN clusters on top-notch infrastructure to ensure the <100ms latencies you want.

      * Mobility - WebRTC lets you switch networks at any time. Go from WiFi -> Mobile with zero interruptions.

      > In fact, an interruption does happen when network conditions change – since you need to negotiate connectivity again. It is not seamless.

      • fulafel 317 days ago
        The app data only needs to go via a server if you don't have IPv6 and your IPv4 NAT is in the minority of NAT boxes that break the usual P2P NAT workarounds[1] like UDP hole punching. (Firewalls can of course also be configured to block it, but that's another kettle of fish.)

        [1] https://www.cisco.com/c/en/us/support/docs/ip/network-addres...

        • drdaeman 317 days ago
          I haven't bothered to learn why, but if both broadcaster and viewer are behind a symmetric NAT, usual STUN tricks don't always seem to work in practice. At least they haven't worked for me.

          I have a DIY WebRTC forwarder for an RTSP stream that lives on a home server behind a NAT. It has NAT because it lives in a cgroup that's isolated to an IoT VLAN, and I hadn't originally planned on WebRTC there, hoping I could make it work by restreaming over HTTP or WebSockets. The NAT there is of the most common type: nftables' masquerade statement for the appropriate output interface, the usual conntrack allow established,related rule, and a rule that allows outbound connections. For whatever reason, WebRTC only worked for me when my viewing device wasn't behind any other NATs.

          Now, this is not a proper argument. Being a very lazy ass, I haven't bothered with diagnostics, so I don't know why it didn't work. I was already quite frustrated with various cross-browser JS shenanigans, and after checking that STUN was indeed configured but had not worked, I just punched a couple of port forwards and called it a day.

          For whatever reason, P2P streaming seems to work somewhat worse in practice than it should in theory. Usual computer leprechauns, I guess.

      • Sean-Der 317 days ago
        You can traverse NAT with PCP/NAT-PMP. libdatachannel doesn’t support it yet, but I don’t see why it couldn’t in the future!

        ——

        Yes <100ms is hard, I don’t think most users are looking for that. 400-600ms is what people can expect and I always see that!

        ——

        You don’t need to negotiate again with NICER

        • gloosx 316 days ago
          UPnP could be enabled on your NAT, but probably not, because it comes with certain security considerations. If you take the most used networks globally (cellular carrier networks), they won't let you do it. The point about these techniques is that you can't fully eliminate guesswork about whether a specific NAT will be traversed for arbitrary peers, so a relay is mandatory. If you're fine with only part of the audience getting connected to your stream, you can go full P2P; otherwise the server-less claim is too bold.

          400-600 ms is certainly not a "real conversation" experience, though it's getting close. Realistically, the latency spread would be much broader depending on the peers' geography in a true P2P mesh. So for the part of the audience closer to the streamer, the conversation can become more real indeed, but for those further away it will be increasingly choppy.

          If you don't need to negotiate again, how do you know which new socket to send bytes to? Is it some kind of a wizard, this NICER?

    • eminence32 317 days ago
      Thanks for all your work getting this PR merged!

      I've been trying it out for a while now (streaming to a self-hosted broadcast-box), and almost everyone who views the stream says that the video will freeze for 1 to 2 seconds every now and again (every few minutes). Audio is uninterrupted. Any hints about how I can debug this?

      • Sean-Der 317 days ago
        Sorry about that!

        chrome://webrtc-internals should tell you the exact reason. My guess is packet loss and then it recovers on the keyframe.

        In the next day or two I am gonna open a PR to set NACK buffer properly. Right now it is too small :/

        Sorry you are hitting issues. Will try my best to fix it up! Email me sean@pion.ly and would love to make sure we get it working 100%. I bet other users are hitting the same issues and not sharing.
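
        If anyone wants to watch this programmatically instead of eyeballing webrtc-internals, here is a small sketch of the stats to poll on the viewer side (assuming you can get at the page's RTCPeerConnection):

          // Freezes that line up with jumps in packetsLost/pliCount point at
          // packet loss followed by recovery on a keyframe.
          async function logInboundVideoStats(pc: RTCPeerConnection): Promise<void> {
            const stats = await pc.getStats();
            stats.forEach((report) => {
              if (report.type === "inbound-rtp" && report.kind === "video") {
                console.log({
                  packetsLost: report.packetsLost,
                  nackCount: report.nackCount,     // retransmission requests
                  pliCount: report.pliCount,       // keyframe requests after loss
                  framesDropped: report.framesDropped,
                });
              }
            });
          }
          // e.g. setInterval(() => logInboundVideoStats(pc), 2000);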

        • eminence32 315 days ago
          Thanks for your hint about chrome://webrtc-internals -- I'll see what that tells me!
      • doctorpangloss 317 days ago
        In my opinion, moderate your expectations of non-libwebrtc implementations like the one used in this feature.

        Here's some speculation: OBS users will be among the few experiencing non-libwebrtc WebRTC bugs, and maybe 1% of the users who hit a bug in OBS's WebRTC will report enough telemetry to fix it. But 99% of WebRTC use happens in Chrome, Edge or Safari, and maybe 50% of those users (more than enough) report telemetry. So if their WebRTC bug was hardware / network specific - the bugs that matter - libwebrtc will have observed it and by now, probably fixed it.

        Despite telemetry from hundreds of millions of users every day, it still took years for libwebrtc to reach truly robust, flawless stability on all the platforms it deploys on.

        It's a complex technology.

      • wisdow 308 days ago
        I've been experiencing the same issue. I tried to figure out what causes it, but without success. If you have any ideas, I'm down to discuss it.
    • viraptor 317 days ago
      > Serverless Streaming - WebRTC is P2P

      As I understand WebRTC, it describes the connection wrapping and the streams / negotiation, but not the discovery part. Which means it's about as p2p as HTTP - if you add a way to find the endpoint and open sockets, you can make peers talk.

      Is there something more in the OBS implementation that makes it more p2p? Am I missing something?

      • notatoad 317 days ago
        I think the point is that the streams themselves require a lot of bandwidth, so you can make the expensive part of the process peer-to-peer instead of having to route the whole thing through a server and pay both inbound and outbound bandwidth costs on the stream.

        it's not really meant to be a useful tool for making a pure peer-to-peer video service.

        • the8472 317 days ago
          But OBS is the server serving the stream to everyone. It's using WebRTC in a client-server fashion. There's nothing P2P about it.
        • viraptor 317 days ago
          Ok, so it's more of a "direct connection" than "peer to peer" scenario. That makes sense.
          • capableweb 317 days ago
            "Peer to peer" is "direct connect".

            Usually you'd divide what you're talking about into two different steps. Peer discovery, and peer transport.

            Discovery is how you find other peers: sometimes it's via a DHT, sometimes it's hard-coded, other times just a random walk. But just like in WebRTC (and in TCP), you need to get the address from somewhere out of band so you can connect, usually something like a centralized directory. The transport of data, though, is P2P, just like TCP. So once you've found the peer and connected to it, all data goes directly between you.

            Doesn't make it less/more P2P than any other P2P system out there.

            • viraptor 317 days ago
              I'm not sure I understand what you mean as p2p transport. What makes HTTP or IMAP not p2p in this description? (You find the peer out of band (DNS) and connect directly to the peer with the data.)
              • notatoad 317 days ago
                I'm pretty sure the parent is incorrect in describing TCP as a P2P protocol. TCP (like HTTP or IMAP) has strongly defined client and server roles; the two ends of the connection do not act as peers. P2P should mean that neither end of the connection takes a specific role: there's no end that could really be classed as server or client.
          • jraph 317 days ago
            Doesn't P2P always require some kind of discovery mechanism unless the peers already know each other? For example, for P2P file sharing with torrent files, a central server (a tracker) lists the current peers. What is different between this and WebRTC? It's only partially decentralized, but we still call it P2P in both cases.
            • Sean-Der 317 days ago
              Yes WebRTC does require a discovery mechanism. That is where WHIP/Signaling comes into play.

              I have a chapter on signaling here https://webrtcforthecurious.com/docs/02-signaling/

              I have an example of WebRTC without signaling https://github.com/pion/offline-browser-communication. Each side has to agree on things ahead of time. Useful for security cameras/IoT/LAN stuff though!
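
              To make "signaling" concrete: it is just moving two SDP strings between the peers, and WebRTC doesn't care how they travel (copy-paste, QR code, a chat app). A toy sketch:

                async function makeOffer(pc: RTCPeerConnection): Promise<string> {
                  await pc.setLocalDescription(await pc.createOffer());
                  // Wait for ICE gathering so the SDP contains all candidates
                  // (avoids needing a separate trickle-ICE channel).
                  await new Promise<void>((resolve) => {
                    if (pc.iceGatheringState === "complete") return resolve();
                    pc.onicegatheringstatechange = () => {
                      if (pc.iceGatheringState === "complete") resolve();
                    };
                  });
                  return pc.localDescription!.sdp; // hand this to the other peer
                }

                async function acceptAnswer(pc: RTCPeerConnection, sdp: string) {
                  await pc.setRemoteDescription({ type: "answer", sdp });
                }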

              • eyegor 317 days ago
                Since you seem familiar with it, do you think it's possible to run a WebRTC connection over HTTP in a modern browser? I tried for a while to make this work but gave up, because in JS land it seemed to want the getUserMedia APIs, which aren't allowed in an insecure context. The reason I was trying: to stream audio + data locally from a device that emits its own wifi network (fully offline due to environment restrictions). It seems dumb to have to serve invalid self-signed certs just to set up a WebRTC ICE handshake.
                • Sean-Der 317 days ago
                  Unfortunately no.

                  `localhost` is allowed. Could you reverse port forward the service to localhost? That will get around those restrictions.

                  • eyegor 317 days ago
                    The service/webserver is running on the device, and clients (let's say a cell phone) connect to the device's wifi network and talk to the webserver over it. I don't think there's any way to trick the client into thinking the device is localhost? Maybe some HSTS shenanigans?
                    • jhugo 317 days ago
                      Self signed certs would be much easier. Or a CA and certs signed by that; since it’s a controlled environment maybe you can install that CA to the trust store on the viewing devices.
                      • eyegor 317 days ago
                        Unfortunately I don't control all of the clients, and I'm using self-signed certs now. It's just a hassle having to explain to non-CS folks how to click past the unsafe prompt, which looks to them like an error.

                        Meanwhile I can hack together a poor man's WebRTC by sending raw packets over WebSockets, and playback works just fine. I just lose all the real benefits of WebRTC (robustness, UDP vs TCP, etc.). It feels like it should be possible to work around the "secure context" nonsense since I don't access anything on the clients; they just play back a stream from the devices. But Chrome/Firefox tend to block the ICE handshake in HTTP mode.

                        • jhugo 317 days ago
                          If you’re serving DNS to the clients too, and you’re able to push updates to these devices regularly enough somehow, you can also use real certs with names under some public DNS zone that you control. (Make it resolve to a public IP on the Internet so you can issue/renew the cert, and a private IP on the private network.)

                          But yeah, one way or another, you need a secure context for WebRTC to work. That’s in the spec, and browsers follow it.

                          Perhaps in your context it really is nonsense, but how is the browser to know that your network/physical layer is secure?

                        • linuxdude314 317 days ago
                          What prevents you from using an actual cert from a trusted CA?
            • the8472 317 days ago
              > Doesn't P2P always require some kind of discovery mechanism unless peers know each others?

              That discovery mechanism can be a decentralized network where any contact is an entrypoint. You connect to the network ~once and from that point onward everything is p2p and no fixed servers are involved or in control.

    • sitkack 317 days ago
      Can this do WebRTC between servers? Can we construct a tree of OBS instances?

      The numbers are purely illustrative. I don’t know what OBS can actually do, but let’s say each OBS server can handle 50 connections; that means with a two-level-deep tree, we could handle 2,500 participants in a single call.

      • Sean-Der 317 days ago
        I don't have an idea on how to do cascading OBS instances. I should write some software to make this easy though. I want to make 'self hosting' as easy as possible. I think https://github.com/glimesh/broadcast-box will fill that gap for now.

        For my own usage I run WebRTC servers and have them 'replicate' the video traffic. LiveKit did a write up on this with some good graphics https://blog.livekit.io/scaling-webrtc-with-distributed-mesh...

        • prox 317 days ago
          What I would like to see is a Twitch-like experience but within your own space, aka what a website is to the web, where the host is nothing more than a facilitator. So you pay for the basic tech and maybe bandwidth, but the rest is up to the streamer. It would be great to have a real alternative to Twitch that's completely your own. People could still federate, but there could also be a main hub (broadcastbox.com or whatever).

          Anyway it’s super exciting stuff! Great work!

          • bluefirebrand 317 days ago
            I have been thinking about this a bit, kind of like a "what if you could set up a streaming website as easily as a wordpress blog"
            • Sean-Der 317 days ago
              That was my hope for Broadcast Box! Mind checking it out and telling me what you think?

              It has a docker image so you should be able to run a 'streaming site' with just a few commands :)

              • prox 317 days ago
                What if you created extra facilities on a unified website? I would pay for that. Maybe sub functions, chat functions and so on. You need that for more discoverability, I think.
        • sitkack 317 days ago
          Thanks for the response, I’ll take a look. I really appreciate how much focus you put into this project. I think it does good for the world.
      • pavlov 317 days ago
        This is output only (more specifically over WHIP, WebRTC-HTTP Ingest Protocol).

      For incoming connections and compositing them you need something beyond OBS. GStreamer is one option.

    • throwaway485 317 days ago
      > * Multitrack Input - Upload your transcodes instead of generating them server-side. Give viewers multiple video tracks to see the action from all sides.

      I've always wanted this. Instead of the streamer switching video inputs, we could select from the viewer side which perspective we want. I've also thought about things like NASCAR partnering with Valve/Steam to use the Valve Index for 360-degree views from each car on the track. I don't know why they're not marketing VR to people who love NASCAR; it'd be such an odd and likely successful niche. It'd be cool to accept multiple video inputs and even patch them together in realtime on the viewer side (unless they're specifically disparate).

    • erlend_sh 317 days ago
      > You don’t have to stand up a server anymore to stream for a small audience.

      Now this is actually HUGE. It means Twitch has far less power as a gateway for greenfield streamers.

      • starttoaster 315 days ago
        Disagree. There is far more to Twitch than just "video goes here." There's the chat system for one, which this doesn't solve for. To many, an important part of streaming is the social aspect of the chatroom. It's also a way for your fans to communicate back to you. Sure, they can open up your stream in a browser and chat with you in a Discord, but you're talking about requiring 2 tools being open as opposed to one.

        And that's not even the worst part: the subscription/follower system. The goal for any greenfield streamer is to gain a follower base. These followers and concurrent viewers are what allow you to get a "subscription" button for people to pay you. Otherwise you're asking for donations via Patreon or something similar? However, if you're getting your subscription money via Patreon, you're missing out on potential boat loads of money from Twitch Prime subscribers, where every Twitch user that also has an Amazon Prime subscription can send you one free subscription per month; a LOT of Twitch streamers make a lot of additional money this way, at no additional cost to the viewer.

        Things you're giving up as a greenfield streamer by rolling their own streaming platform:

        * The beginning of a follower base on your destination platform (Twitch)

        * A "single pane of glass" for viewing your stream, chatting with you and other viewers, and giving you subscription/donation money. This is a deal breaker for a non-negligible number of people, and your follower base might not grow as large or as quickly as you'd like.

        * Twitch Prime subscription money.

    • felipellrocha 317 days ago
      > Sub-Second Latency - Create content and interact with viewers instantly. There is something magical about having a real conversation with your viewers.

      How is this possible?

      • Sean-Der 317 days ago
        It is pretty easy to get a one-way trip time for packets that is sub-second! You see it with conferencing and other real-time communication things.

        If you are curious about the 'how' of WebRTC, I wrote a free/open source book that goes into the details: https://webrtcforthecurious.com/. Happy to answer any particular questions you have.

        • felipellrocha 317 days ago
          Man... I've been up all night, traveling all day, trying to adjust to the new timezone. Boy, I swear I read "sub-millisecond". Lol. Makes sense :) Thanks for the resource!
        • beebeepka 317 days ago
          Bookmarked.

          I added WebRTC support (audio, video, chat) to my already existing application a couple of years ago and felt the technology wasn't exactly ready for primetime. To me, it felt like the out of the box developer experience wasn't exactly great.

          I am only saying this because I got to see exactly how much effort is required to get things going. Your work is greatly appreciated

        • evv 317 days ago
          You are a legend.
      • gruez 317 days ago
        Why shouldn't it be possible? Screen share on a Zoom/Teams meeting is basically a stream with sub-second latency.
      • hsudhduhsh 317 days ago
        Most connections today are sub-second... a second is a lot of time.

        Also, this is probably only taking California as the whole world, like everyone does.

        • imtringued 317 days ago
          I have less latency to California than most Bluetooth headsets have to your ears.
      • Karrot_Kream 317 days ago
        WebRTC attempts to set up a direct connection between the streamer and the receiver so packets are being sent over directly. This doesn't always work out, say if there's bad connectivity (e.g. NAT) somewhere so the connection uses a TURN server or if there's a conference call situation where an SFU is involved, but usually it works.
        • skykooler 317 days ago
          I ran into a situation yesterday where it doesn’t even work between two devices on the same wifi network. It very much depends on the particulars of your connection.
        • xen2xen1 317 days ago
          And easy to find for DoS or DDoS. Doesn't seem like a benefit.
        • andybak 317 days ago
          SFU?
          • Karrot_Kream 317 days ago
            SFUs, or Selective Forwarding Units, receive media streams from multiple participants but only forward a subset of those streams to certain participants. For example, if you're in a Webinar of 20 people, instead of opening up direct connections to all 19 other participants (and having every other participant do so as well), you can open up a direct connection to the presenter and an SFU which will send you an aggregated, downsampled stream of all the other participants. It reduces direct connections across large calls and saves on processing power across all participants by aggregation.
            • ShadowBanThis01 316 days ago
              Then there’s STFU, which mutes all participants to save data on audio.
      • numpad0 317 days ago
        It just means the overhead is smaller for the feature set it offers. It’s not faster than bare MJPEG over UDP, but not way slower than that either.
    • EGreg 317 days ago
      Traveling in Europe currently, but got curious.

      How many viewers can it support? RTMP usually goes to platforms that broadcast to many users. What about with WebRTC, can I have 100 viewers peer to peer? Maybe with TURN relays?

      • jeroenhd 317 days ago
        Practically: however many your computer and uplink can serve. OBS needs to be tested, but this browser-to-browser test may give some indication: https://tensorworks.com.au/blog/webrtc-stream-limits-investi....

        If you've got 100 viewers and can manage to saturate a gigabit uplink (which can be difficult because of routing and other upstream crap), you should be able to send about 9 Mbps of video + overhead, which is pretty close to Twitch's bandwidth cap.

        Because consumer ISPs generally don't have great routes between them and sustained gigabit uploads probably don't always work reliably because of overbooking with consumer lines, you'll probably be better off setting up an SFU on a cheap (temporary) cloud server somewhere.

        Theoretically: media streams are identified by a 32 bit number, so about 4 billion clients.

        Data streams are limited to 65535 endpoints (an unsigned 16-bit number with a reserved 0).
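
        As a back-of-the-envelope check of the numbers above (purely illustrative figures):

          // How many direct viewers fit in a given uplink at a given bitrate.
          function maxViewers(uplinkMbps: number, perViewerMbps: number): number {
            return Math.floor(uplinkMbps / perViewerMbps);
          }
          maxViewers(1000, 10);  // ~100 viewers of ~9 Mbps video + overhead
          maxViewers(20, 2.5);   // ~8 viewers on a typical 20 Mbps home uplink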

        • martinald 317 days ago
          > Because consumer ISPs generally don't have great routes between them and sustained gigabit uploads probably don't always work reliably because of overbooking with consumer lines, you'll probably be better off setting up an SFU on a cheap (temporary) cloud server somewhere.

          Not sure why you think that? Maybe on cable modems, because DOCSIS is highly asymmetrical, but on FTTH GPON et al. (pretty much the only technology which supports gigabit upload; DOCSIS gigabit upload is extremely rare), there is no reason it couldn't saturate upstream at 1 gigabit+. If anything, consumer ISPs are usually way more contended on the downlink side than the upstream side.

          • jrockway 317 days ago
            GPON is typically shared bandwidth; 2.5Gbps for n customers (that could be an entire building, or the 4 houses adjacent to a box in the backyard, etc.)

            More advanced fiber networks split bandwidth based on frequency ("color") instead of giving each ONT a time slot based on the current network capacity. But basically, if you're uploading at 1Gbps, that is fine until 1.5 more neighbors decide to do the same thing. It's rare. When I worked at Google Fiber we kept an eye on PON utilization and there was never a case where a customer couldn't get the requested bandwidth. We even had customers that maxed out 1Gbps/1Gbps 24/7 in an attempt to get banned or something, but of course we wouldn't ban you for that. We did worry about PON capacity and had some tech ready to go in case we ever needed to switch to truly dedicated per-customer gigabit connections.

            At another ISP I worked at, our marketing material described the 1Gbps GPON-based link as "dedicated", but many customers noticed it was shared. It would slow down during peak business hours, and then get fast when other people in the building went home from work. We removed the term "dedicated" from our marketing. A political battle that I paid dearly for, but hey, they still exist and didn't get sued into oblivion by the FTC, so that's nice. (We also sold frequency-multiplexed 10Gbps circuits that really were dedicated... I suppose until our uplink to our ISP was saturated. I think we had 100Gbps transit and that never happened. But yeah, there is no such thing as dedicated unless you are physically plugged into the network that you want to reach. The Internet tries its best.)

            • martinald 317 days ago
              I know that GPON is shared; but as you say, it's rarely the 'weak link'. Regardless, even if it were, it's much more likely that the downstream would be saturated at the GPON level than the upstream; consumer ISPs are much more downstream-heavy than upstream-heavy. So I have no idea what the parent post means by saying it's hard to saturate a gigabit uplink on consumer connections. It's the opposite, if anything.
          • numpad0 317 days ago
            It’s not about bandwidth inside the AS, but about fiber peering from ISP datacenters “to the Internet”. I don’t have anyone’s internal information, but that must cost 10-100x more than $35/Gbps/month. Residential internet is a gym membership model, a best-effort offering. Resources are always heavily over-committed based on typical use cases of groups of average users.
          • jhugo 317 days ago
            “consumer ISPs generally don't have great routes between them” is the way more important part.

            An eyeball network will generally have considerably more, and considerably higher bandwidth, peering relationships with content provider networks than with other eyeball networks.

            At AVStack we experimented quite a bit with P2P multiparty video calls, it’s certainly possible but as you scale up the number of participants there’s huge variance in quality depending on the combination of ISPs used by the participants, the time of day, network conditions etc.

          • bluefirebrand 317 days ago
            > Maybe on cable modems

            Maybe I'm wrong, but I'm pretty sure cable modems are still what the vast majority of consumer households have.

            • martinald 317 days ago
              Yes, but the parent is referring to gigabit upstream connections. Very few gigabit upstream services are over DOCSIS; I don't know of a single provider that offers gigabit upstream on DOCSIS. In the UK, Virgin Media offer 100 Mbit/s up on their top tier (1 gigabit down). I think the fastest Comcast goes in the US is 200 Mbit/s up over DOCSIS, and that's only in certain areas.
        • weinzierl 317 days ago
          What do you mean by Twitch's bandwidth cap? Obviously Twitch supports more than 100 viewers, so is this an upstream cap on the streamer, a downstream per-user cap, or something different altogether?

          A glance over Twitch's broadcasting guidelines was not enlightening, but I'm clueless in these matters.

          • ripdog 317 days ago
            He means the cap on the bitrate of the video that Twitch will ingest. Twitch will then send this video to any number of viewers, as you say.
          • jeroenhd 315 days ago
            If you send more than about 6.75 Mbps of video to Twitch, you'll start getting errors. With the variable size of encoded streams, you need to play with the cap a little to get a problem-free experience.
      • charcircuit 317 days ago
        That's an apples to oranges comparison. With proper infrastructure both can scale to many users. For streaming to many people peer to peer will be inefficient, so I would recommend not doing that.
        • EGreg 317 days ago
          How can OBS with the new WebRTC and WHIP+WHEP be used to stream to thousands of people at once, without using RTMP?

          And what is the proper infrastructure to scale WebRTC to thousands of listeners at once?

          • fidotron 317 days ago
            In practice this is what SFUs are for. You would simulcast a set of different encodings from OBS to the SFU and that would fan out to other SFUs and clients as appropriate until the desired scale is reached. This does get expensive.

            If you don’t actually need the low latency then you are better off with hls-ll delivered via a CDN.

            • EGreg 317 days ago
              If I'm ingesting and decrypting the stream anyway, wouldn't it be better to build an MCU, to send out one stream and adapt the resolution for each client?

              https://www.digitalsamba.com/blog/p2p-sfu-and-mcu-webrtc-arc...

              • fidotron 317 days ago
                Cost. SFUs don’t have to transcode the video, just pass buffers around and ensure both ends agree on the format. That is an enormously lighter process.

                Personally I think simulcast (sending multiple differently encoded streams) is almost always wrong for many Webrtc uses (especially mobile publishers) but in the OBS case it actually makes a lot more sense.

                • EGreg 317 days ago
                  How does WebRTC know what resolution to send? Or does it always send the original resolution?
                  • fidotron 317 days ago
                    That all depends on how you set it up. Generally speaking a lot of WebRTC is about how the connection negotiates the parameters of the delivery of the different media, including codecs, resolution, etc. One track can be delivered over a connection in multiple different ways at once as well.

                    If you want to know more you're probably best off going through the MDN docs about the WebRTC browser API and learning from that.
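
                    As a taste of that setup, a sketch (browser-API flavored; the rids and scale factors are arbitrary examples) of a publisher offering several simulcast layers for a server to pick from per viewer:

                      async function publishSimulcast(): Promise<RTCPeerConnection> {
                        const pc = new RTCPeerConnection();
                        const media = await navigator.mediaDevices.getUserMedia({ video: true });
                        // One track, several encodings; an SFU forwards whichever
                        // layer suits each viewer's bandwidth.
                        pc.addTransceiver(media.getVideoTracks()[0], {
                          direction: "sendonly",
                          sendEncodings: [
                            { rid: "full" },                               // source resolution
                            { rid: "half", scaleResolutionDownBy: 2.0 },
                            { rid: "quarter", scaleResolutionDownBy: 4.0 },
                          ],
                        });
                        // ...offer/answer exchange with the SFU goes here...
                        return pc;
                      }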

          • charcircuit 317 days ago
            Cloudflare Stream has beta support for WebRTC ingestion and playback. If you did more research, you may find other services or projects for scaling WebRTC streaming to handle many viewers and for transcoding to lower quality for people without enough bandwidth.

            >And what is the proper infrastructure to scale WebRTC to thousands of listeners at once?

            It looks like a pool of ingestion servers and a CDN for viewers. Ingestion servers handle getting the stream from the streamer and encoding it into whatever formats are needed. The CDN handles distributing this stream to people who want to watch it.

          • Sean-Der 317 days ago
            I have talked with companies that are already doing 100k+. I don't have the exact numbers but Millicast and Phenix both do very large audiences.
            • EGreg 317 days ago
              How are they doing it, is the question. What's the setup?
          • grogenaut 317 days ago
            How can a webserver serve 10k people at once?
  • password4321 317 days ago
    Related (somewhat):

    WebRTC support being added to FFmpeg

    https://news.ycombinator.com/item?id=36130191

    10 days ago

    • sylware 317 days ago
      Good news for my custom, plain and simple, C coded media player based on ffmpeg. I'll be able to view such streams, but...

      ... how can I contribute to such a p2p stream without a big tech web engine? (I use only noscript/basic (x)html browsers.)

      I could set up a "node" (it should be plain and simple C code) on my desktop computer; then, from this node, I would extract the stream itself and watch it locally (I have an 800 MBit uplink, and I know what IP diffserv with its ethernet translation is).

      If the specs/protocol stack are reasonable and not too convoluted, I would even be able to code such a node (based on ffmpeg, very probably).

      AND... if this p2p network is actually good and works IRL, do you really think the CDNs and streaming platforms will let it exist without a little bit of DDOS-ing...?

  • nubinetwork 317 days ago
    I wonder if this works for input, or if it's output only. I experimented with using WebRTC for speedrunning races, but ran into issues because I didn't know how to interactively crop the inputs in order to stitch them back together and stream the output to Twitch.
    • Sean-Der 317 days ago
      It is output only (for now).

      I will be adding input also, making it easier for people to build streams like Twitch's Guest Star[0]. First just implementing WHIP/WHEP, then we are going to see if we can add some batteries/a better experience that is OBS-specific.

      [0] https://help.twitch.tv/s/article/guest-star?language=en_US

    • grogenaut 317 days ago
      I'm betting it's output only. But OBS can take many, many sources, including a browser window that is in memory. You can put a player in one of those hooked to the stream, like you can render Twitch chat.
  • rektide 317 days ago
    Yet another submission where WHIP & WebRTC are conflated. This is due to using HTTP to stream video to another service which can actually do WebRTC.

    Still excellent, still a huge win, but also a pretty big distinction. Users still need considerably more to actually get online.

    Same conflation happened for ffmpeg getting WHIP. https://news.ycombinator.com/item?id=36130191

    • Sean-Der 317 days ago
      Sorry, I might be confused, but I don’t think `This is due to using HTTP to stream video to another service which can actually do WebRTC` is right.

      WHIP is WebRTC signaling only. It just standardizes using a HTTP POST to exchange Offer/Answer. No media flows over HTTP.
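
      To make that concrete, here is a minimal sketch of a WHIP publish from a browser-style client (the endpoint URL and stream key are placeholders):

        // One HTTP POST carries the SDP offer; the response body is the SDP
        // answer. After that, media flows over ICE/DTLS/SRTP, not HTTP.
        async function whipPublish(media: MediaStream): Promise<RTCPeerConnection> {
          const pc = new RTCPeerConnection();
          for (const track of media.getTracks()) pc.addTrack(track, media);

          await pc.setLocalDescription(await pc.createOffer());
          const resp = await fetch("https://example.com/api/whip", {
            method: "POST",
            headers: {
              "Content-Type": "application/sdp",
              Authorization: "Bearer myStreamKey", // WHIP uses bearer-token auth
            },
            body: pc.localDescription!.sdp,
          });
          await pc.setRemoteDescription({ type: "answer", sdp: await resp.text() });
          return pc;
        }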

  • mikece 317 days ago
    Will this support OBS being the conferencing/recording point such that one can have multiple video remotes in a live-streamed video (like StreamYard but open source and not browser-based)?
    • pavlov 317 days ago
      Not by itself. This adds the option of streaming output using WHIP, the WebRTC-HTTP Ingest Protocol [1].

      [1] https://www.ietf.org/archive/id/draft-ietf-wish-whip-01.html

    • Sean-Der 317 days ago
      I hope so soon! I now prefer using OBS vs Chrome screenshare because the quality is so much better.

      Conferencing providers just need to add WHIP support. It is trivial to add.

      • mikece 317 days ago
        > Conferencing providers just need to add WHIP support. It is trivial to add.

        That would be awesome for consumers... but horrible for the business model of conferencing providers which is why it won't happen.

      • aftbit 317 days ago
        Has anyone started an issue or PR for Jitsi? We use them exclusively for work. I'd love to be able to use OBS with work streams without awkward virtual webcam hacks.
  • seydor 317 days ago
    WebRTC is such a complex technology, still, after so many years. I wonder why the web standards folks didn't decide to just deprecate old tech and create something that is usable out of the box, without STUN/TURN servers and all the other complexity that I don't even know about. In 2001 it was much simpler to set up an all-to-many video chat with Flash and an open source media server. It feels like this tech has been chasing its tail.
    • crazygringo 317 days ago
      It has nothing to do with web standards, or the web at all.

      Back in 2001, your computer was probably connected directly to the internet over Ethernet with a dedicated IP address, so the simple setup worked fine.

      But then Wi-Fi came out and people had routers at home and NAT became the norm. This forced STUN/TURN.

      It's entirely a networking issue, not a web issue. IPv6 could have made NAT a thing of the past, but obviously it hasn't. But firewalls are also the norm now in a way they weren't back in 2001 because security has to be tighter now.

      • sylware 317 days ago
        As far as I know, many IPv4 internet gateways, if not nearly all, have gateway UPnP support.

        The issue is that the browser is unable to deal with gateway UPnP... well, since the web engines are now real cluster f*ks of everything, they could add a DOM interface for it.

        • crazygringo 317 days ago
          The main problem is that malware can leverage UPnP to do real harm. A smaller secondary problem is that UPnP doesn't work well on large corporate networks (something about the "chattiness" of so many messages).

          Browser permission to open up a port through UPnP is the "easy" part. The hard part is how your router distinguishes that as a legitimate request, versus local botnet malware opening one up. There are some extensions around UPnP to add user authentication as part of the process, but they don't seem to be widely adopted. To the contrary, general security recommendations are usually to disable UPnP.

          I had thought that videoconferencing would indeed finally be the impetus to solve this in a secure way, but meetings with much more than 2 participants require a centralized server anyways to prevent bandwidth from exploding, so relaying video calls through a server just became standard.

          • sylware 317 days ago
            We are talking about domestic usage, not corporate, and in my country nearly all domestic internet users have IPv6, hence no NAT or similar abomination. It is even bigger than that: the major mobile internet providers provide IPv6 by default.

            It has been like that for years. We are talking about tens of millions of people with native IPv6, mobile and domestic.

            NAT is something which should not have existed in the first place.

            • aragilar 317 days ago
              Not here, and the majority of ISPs have no real interest in IPv6 unfortunately.
              • sylware 317 days ago
                Above all, it means IPv6 and IPv4 with gateway UPnP are pretty much the same.
                • crazygringo 317 days ago
                  With the security risks of both as well.
                  • sylware 317 days ago
                    The moment you are online, you are done for. Security does not exist; on HN we all know that. What worries me is the following:

                    how "convenient" this "security risk" is in favor of big tech centralized services... yes, how convenient...

                    Guess what: I will gladly take "this security risk" on top of the tons we already have, to open the gate to independence from big tech centralized services.

                    And as I said, in my country, it has been years with tens of millions of people on ipv6, mobile and domestic.

                    "Security" is just police in the digital world. Police is a permanent process. Without digital police, we'll get digital anarchy.

                    • crazygringo 316 days ago
                      Security isn't binary, on HN we all know that. Firewalls prevent a lot of attacks, and they have nothing whatsoever to do with police departments.

                      You're free to not use a firewall but you'll certainly understand why most people aren't going to follow your example.

                      • sylware 316 days ago
                        Security is a process, not a deliverable in the digital world.

                        You cannot ask domestic users to deal with a firewall. It's beyond them; this is quite unreasonable actually. Most people don't even know what a firewall is.

                        We are talking about a digital freedom space which has to be protected by digital police, or we'll end up with digital anarchy, in the filthy hands of the digital mafia which is big tech (seems to be already the case though). The other way around is digital dictatorship, which is no better.

                        And as I said, it is much better to go p2p and federated than to be jailed in big tech centralized services.

                        But I don't see big tech letting that happen; they'll probably hire some hackers in order to sabotage it. Firewalls would actually serve them well: anything beyond centralized services would be "blocked". As I said, convenient, very convenient... way too much actually.

                        In my country tens of millions of people have IPv6, not to mention that gateway UPnP for IPv4 is almost everywhere too, so tens of millions of people have been "following my example" for years. Of course, without an efficient digital police, this will turn into an omega sh*t show.

                        People who work for centralized services which could be threatened by p2p protocols are more likely to try to scare people away from them.

            • crazygringo 317 days ago
              > We are talking about domestic usage, not corporate

              No we're not, nobody said that.

              > NAT is something which should not have existed in the first place.

              It was an absolute necessity when it was introduced, since there are fewer IPv4 addresses than there are people on earth.

    • mort96 317 days ago
      STUN and TURN aren't unnecessary complexity; they're absolutely essential due to NAT. You need something like STUN to be able to communicate peer-to-peer through NAT, and you need something like TURN to tunnel traffic through a server in cases where NAT hole punching is impossible.

      Maybe there's some unnecessary incidental complexity in the way the STUN and TURN protocols work? I don't know, I honestly haven't investigated the details of the protocols. But the actually problematic complexity comes from the fact that something like STUN and TURN is needed at all, and that's essential complexity which arises from the problem domain.
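
      For reference, this is all an app sees of that machinery; a minimal sketch (placeholder URLs and credentials) of pointing a peer connection at STUN and TURN:

        // The browser runs ICE for you: STUN discovers your public address/port
        // mapping, TURN relays as a last resort when hole punching fails.
        const pc = new RTCPeerConnection({
          iceServers: [
            { urls: "stun:stun.example.com:3478" },
            {
              urls: "turn:turn.example.com:3478",
              username: "user",      // placeholder credentials
              credential: "pass",
            },
          ],
        });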

      • vxNsr 317 days ago
        I could be wrong but I think GP is arguing for dropping ipv4 to avoid all the complexity that comes from still supporting it.
    • torginus 317 days ago
      The issue comes from the technology using UDP, which creates all sorts of funky networking situations when NAT is involved, necessitating TURN/STUN.

      I think that will be unavoidable unless HTTP3/QUIC or IPv6 gains widespread support on all kinds of network infrastructure.

      • mort96 317 days ago
        TCP makes NAT traversal harder, not easier. WebRTC can be peer to peer thanks to its use of UDP, not in spite of it. The complications of STUN comes from NAT, not from UDP. The complications of TURN comes from firewalls (or overly restrictive (i.e bad) NATs), not from UDP.
      • sylware 317 days ago
        That's why they should have bitten the bullet: IPv6 and TCP, only.
        • predictabl3 317 days ago
          Except they wanted to build something usable? If I could snap my fingers and make IPv6 deployment and usability happen, I would, and I'm sure the WebRTC folks would too. This entire stance is even harder to square with the real world given how long WebRTC has been in development and use.
          • sylware 317 days ago
            Well, the idea would be to move to a clean ipv6/tcp protocol stack at the same pace as ipv6 deployment.

            The best compromise would have been a dual stack: the clean, pure p2p ipv6/tcp, and the brain-f*cked ipv4/nat/upnp/turn/stun/etc. To have a clean and brutal split between the two.

            In my country, people would mostly run the clean ipv6/tcp protocol stack. And I am thinking REALLY simple: no transcoding in the protocol (only the node decides if it needs to transcode), one stream, buffered (it's not a video call). Yeah... excruciatingly simple, and then with many alternative implementations, aka sane.

            But it is kind of a dream; I know CDNs and streaming services won't let that happen in any way. I would not be surprised to see some DDoS-ing "from" them to keep people slaves to their big tech "centralized" services.

    • capableweb 317 days ago
      So, what are you suggesting to use instead of STUN/TURN, in order to solve the issue that STUN/TURN is designed to solve?

      It sounds to me like you don't know what problem is being solved, and therefore you don't understand why the solution looks the way it does.

  • at_a_remove 317 days ago
    I wonder if there's a handy client and, if so, how sophisticated it might be.
  • sylware 317 days ago
    Cannot wait to see the "doc" independent of all streaming platforms...

    :)

  • sharts 317 days ago
    Nice
  • Eruntalon 317 days ago
    [flagged]
  • winrid 317 days ago
    Nice, but curious, why use char* so much in a c++ project?
    • mackal 317 days ago
      Most of the c string usage appears to be just for libobs, which is a C library. They have a few constants that are perfectly fine and the rest of their string usage is actually std::string ...
  • prahladyeri 317 days ago
    If only OBS were somewhat lighter, I'd have used it. You need an Intel i5 or Ryzen 1300X processor just for the "minimum requirements"; imagine what it would take for decent performance!

    Instead, I use another app called ShareX [1], which is much lighter on the OS and processor. It may not have all the features, but you can create a screencast or recording session with ease.

    [1]: https://github.com/ShareX/ShareX

    • piperswe 317 days ago
      The Ryzen 3 1300X is a low-end, nearly 6-year-old CPU. The Core i5 2500K is a mid-range, _12-year-old_ CPU. It's a perfectly reasonable minimum requirement.
    • andybak 317 days ago
      I use ShareX and I use OBS - there doesn't seem to be a huge overlap in functionality?

      ShareX - screen capture and screen recording.

      OBS - streaming (and lots of related functionality).

      I'm guessing you were using OBS for screen recording - but that's not really close to its core feature set.

      • ehsankia 317 days ago
        A lot of people use OBS for screen recording. I do think ShareX is extremely limited unless you want to record something super simple.

        OBS can compose a whole scene, with dozens of different capture methods, overlays, a full audio mixer, transitions, etc. It's the perfect place to record any sort of desktop-based video. It's far, far more than just "screen recording".

      • Shorel 317 days ago
        I don't see anything wrong with using OBS for screen recording.

        I use it for streaming as well, but having it installed, I also use it for recording when I need it. Also for video capture from the webcam.

        OBS can do no wrong to me.

        • andybak 317 days ago
          Yeah - I guess I hadn't really considered it for that task. I've only used it for live stuff personally, and I hadn't made the connection with screen recording per se.
      • hsudhduhsh 317 days ago
        I use OBS exclusively for your ShareX use case.
        • imtringued 317 days ago
          I use the gnome screenshot widget.
    • jeroenhd 317 days ago
      I find OBS not to be that heavy, as long as your GPU does all the work. A modern i3 would probably do fine if it has QuickSync enabled and there aren't too many extra filters and inputs at the same time.

      ShareX doesn't live stream as far as I know? Let alone offer WebRTC?

    • predictabl3 317 days ago
      Sean recently posted a link to an ffmpeg fork where this work is being done too. Gstreamer already has a WHIP module that flew under my radar. So there should be plenty of good options soon.
    • paulryanrogers 317 days ago
    ShareX is Windows-only, though. Still, good to have alternatives. OBS is nice in that learning it means you can use it on 3 of the biggest platforms, even if performance isn't top notch.
    • lucideer 317 days ago
      I haven't used ShareX so I may be missing something here, but the website seems to indicate it's a screen capture utility. Does it do more?

      OBS is not a screen capture utility.

      • CharlesW 317 days ago
        > OBS is not a screen capture utility.

        I've been looking for a way to record my screen and webcam as separate files and so far OBS¹ seems like it might be the only tool that can do that. Is that not a good use for OBS?

        ¹ https://obsproject.com/forum/resources/source-record.1285/

        • circuit10 317 days ago
          I guess they mean not just a screen capture utility
        • lucideer 317 days ago
          Sorry I should've been clearer: it can certainly be used to capture your screen. It's just an extremely ancillary subfeature.
      • gsich 317 days ago
        It is.
        • lucideer 317 days ago
          It can do screen capture. That doesn't make it a screen capture utility. In the same way as Visual Studio is not a note-taking app, blender3d is not a video editor and Excel is not an IDE.
          • nyanpasu64 317 days ago
            Sadly, people recommend OBS for regular screen capture on Wayland, now that X11-based capture apps like SimpleScreenRecorder no longer work there. And OBS isn't very good at screen recording: even cropping the recording by dragging out a specific region, then shrinking the output file to that region, cannot be done easily (alt-dragging the bounding box followed by "Resize output (source size)" picks the uncropped source size, and the "Crop/Pad" filter doesn't allow dragging a screen region).

            In fact the issue was closed without understanding what the reporter was asking: https://github.com/obsproject/obs-studio/issues/8822

            • Shorel 317 days ago
              OBS doesn't work for me on Wayland either.

              I log in into X11 when I use OBS on Ubuntu.

              • chmod775 317 days ago
                It will work under wayland if you have xdg-desktop-portal/xdg-desktop-portal-wlr set up.

                If you've got that set up up correctly, screen sharing will also work in Firefox (for instance on discord).

                As far as I understand it, xdg-desktop-portal is a DE/WM agnostic protocol that enables applications to easily capture a screen - the user just has to run the right backend for their environment. I think it does other stuff too, but screen recording is probably the main use case.

                I'm using Manjaro Sway Edition where that was configured out of the box.

                https://wiki.archlinux.org/title/XDG_Desktop_Portal

          • gsich 317 days ago
            OBS is the screen capture tool. Unless you define screen capture somewhat differently? Nearly everyone on Twitch (or other streaming sites) is using it.
    • haunter 317 days ago
      Use the GPU encoder, not x264. x264 is very CPU-heavy, even on veryfast.
    • mikece 317 days ago
      As fast as processors are these days, I don't see this as a valid complaint (especially with how inexpensive machines powered by the Apple M1 are).
    • pwython 317 days ago
      I've been using OBS since it first came out (11 years ago?) on just a 2012 MacBook. Never had a single issue.
    • DJBunnies 317 days ago
      Post your specs you dinosaur
    • prmoustache 317 days ago
      OBS works well for me on a decade-old machine.