I was thinking most people nowadays have at least 30 Mbps upload, and a 1080p stream only needs ~10 Mbps while 720p needs ~5ish. Also I think it wouldn't have to be live; people would definitely not mind some amount of lag. I was thinking the big O for packets propagating out in the network should be log(N), since if a master is sharing the content and is connected to 10 slaves, then those connect to 10 other slaves each, and so on.
The other limitation I could think of is prioritizing who gets the packets first, since there's a lot of people with 1 Gbps connections or >10 Mbps connections. Also deprioritizing leechers to keep them from degrading the stream.
Does anyone have knowledge on why it isn't a thing still though? It's super easy to find streams on websites, but they're all 360p or barely load. I saw the original creator of BitTorrent was building something like this over 10 years ago, and it seems to be a dead project. Also, this is ignoring the huge time commitment it would take to program something like this. I want to know whether it's technically possible to have streams of, let's say, 100,000 people, and why or why not.
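As a sanity check on that log(N) intuition, here's the back-of-envelope in Python (note that with the bandwidth numbers above, the fanout is really 30/10 = 3 full copies per peer, not the 10 I assumed):

```python
import math

# Assumed numbers from above: ~30 Mbps upload, ~10 Mbps for 1080p.
upload_mbps = 30
stream_mbps = 10
fanout = upload_mbps // stream_mbps   # each peer can feed 3 full copies

viewers = 100_000
hops = math.ceil(math.log(viewers, fanout))
print(f"fanout {fanout}: ~{hops} relay hops to reach {viewers:,} viewers")
# fanout 3 -> ~11 hops; with fanout 10 it would be 5
```

So the depth stays manageable either way; the harder part is keeping that tree healthy.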
Just some thoughts, thanks in advance!
If you want live high quality streaming, a lot of the reasons BitTorrent works so well go away.
Latency matters. In BitTorrent, if the peer goes away, no big deal, just try again in 5 minutes with another peer; you are downloading in random order, so who cares if one piece is delayed 5 minutes. In a live stream your app is broken if it cuts out for 5 minutes.
In BitTorrent, everyone can divide the work: clients try to download the part of the file the least number of people have, so rare parts of the file quickly spread everywhere. In streaming, everyone needs the same piece at the same time.
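Roughly, rarest-first piece selection looks like this (a toy sketch; the names are made up, and real clients derive availability from peer bitfields and HAVE messages):

```python
import random
from collections import Counter

def pick_piece(my_pieces, peer_bitfields):
    """Count how many peers hold each piece we still need,
    then pick randomly among the least-replicated ones."""
    availability = Counter()
    for bitfield in peer_bitfields:
        for piece in bitfield:
            if piece not in my_pieces:
                availability[piece] += 1
    if not availability:
        return None
    rarest = min(availability.values())
    return random.choice([p for p, n in availability.items() if n == rarest])

# We have piece 0; piece 3 is held by a single peer, so it wins.
print(pick_piece({0}, [{0, 1, 2}, {1, 2}, {0, 1, 2, 3}]))  # -> 3
```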
BitTorrent punishes people who don't contribute by deprioritizing sending stuff to peers that freeride. It can do this at the individual level. In a P2P streaming setup, you would probably have some peers getting the feed and then sending it to other peers. The relationship isn't reciprocal, so it's harder to punish freeriders: you can't know at the local level whether the peer you are sending data to is pushing data to the other nodes it is supposed to or not.
I'm sure some of these have workarounds, but they are hard problems that aren't really satisfactorily solved.
> Latency matters. In BitTorrent, if the peer goes away, no big deal, just try again in 5 minutes with another peer; you are downloading in random order, so who cares if one piece is delayed 5 minutes. In a live stream your app is broken if it cuts out for 5 minutes.
First of all, BitTorrent clients do not download in random order or wait 5 minutes. They usually download the rarest block first, but can do whatever they want, whenever they want.
Second, standard HLS sets a nominal segment duration of 6 seconds (some implementations will go as high as 10 seconds), and a client will usually buffer multiple segments before playing (e.g., 3). This means you have 18 seconds before a segment becomes critical.
This is not a difficult thing for a P2P network to handle. You'd adapt things to introduce timing information and manage the number of hops, but each client can maintain connections to a number of other clients and have sufficient capacity to fill a segment if a connection fails. Various strategies could be used to distribute load while avoiding latency penalties.
Low-latency HLS uses much smaller segments and would be more demanding, but isn't impossible to manage.
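Spelling out that arithmetic (the 2-second per-segment fetch time is an assumption for illustration):

```python
SEGMENT_SECONDS = 6     # nominal HLS segment duration
BUFFERED_SEGMENTS = 3   # segments a client buffers before playing

budget = SEGMENT_SECONDS * BUFFERED_SEGMENTS  # 18 s until a segment is critical

# If fetching one segment from a healthy peer takes ~2 s, the client can
# survive several dead peers and still re-fetch from a backup in time.
fetch_seconds = 2
retries = budget // fetch_seconds - 1
print(f"{budget} s budget -> room for ~{retries} failed attempts")
```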
> BitTorrent punishes people who don't contribute
Private communities punish this behavior, BitTorrent clients do not. Most new downloads will appear as freeriders for a long time, and only over long periods far exceeding the download time will enough compatible seeding opportunities arise for them to contribute in any substantial way.
The network does not need everyone to seed, it only needs enough people to seed.
The problem here is that BT works so well because the clients default to "good behavior" (prioritize rare pieces first) and discourage "bad behavior" (leeching/no upload).
This tilts the balance on the whole enough to maintain the health of the network. If you change these, you'd need to introduce other mechanisms to preserve the network incentives.
This is the real key point here. In the P2P wars, only BitTorrent was the winner (though the old networks still live on and you can find interesting stuff in them...). The timeless lesson is that leeches need to get the short end of the stick, and the people who give back, the prize. It's a fundamental characteristic of human nature; a tragedy of the common leech, or something like that.
In the case of bring-your-own-client, the incentives are exactly the same: clients would likely default to good behavior, as network health equals user experience, and exactly as with BitTorrent there would be neither punishment nor need for it if some clients disobey.
"Punishment" (tit-for-tat algorithm) is one of the defining features of bit torrent, especially in comparison to what came before it.
The original spec for the original client allocated a portion of its bandwidth to random peers instead of known-good/preferred peers, so if you had no chunks you were basically bandwidth and/or peer restricted.
If you take the arch linux ISO right now and put it into aria2c to be a new, unknown client with no data to offer, you'll find that while it takes a few seconds to join the network, fetch metadata and connect to peers, you'll quickly saturate your connection completely without ever uploading a single byte.
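The choking logic is roughly this shape (a simplification of tit-for-tat, not the actual spec; the slot count and field names are made up):

```python
import random

def choose_unchoked(peers, upload_slots=4):
    """Keep upload slots for the peers that sent us the most data,
    plus one random "optimistic unchoke" slot so a brand-new peer
    with nothing to trade can still bootstrap."""
    by_contribution = sorted(peers, key=lambda p: p["bytes_from"], reverse=True)
    unchoked = by_contribution[: upload_slots - 1]
    rest = [p for p in peers if p not in unchoked]
    if rest:
        unchoked.append(random.choice(rest))  # the optimistic slot
    return unchoked

peers = [{"id": i, "bytes_from": n} for i, n in enumerate([0, 512, 8192, 0, 2048])]
print([p["id"] for p in choose_unchoked(peers)])  # e.g. [2, 4, 1, 0]
```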
If you wanted, a streaming network could use direct access or low-hop access as seeding incentive - seed to get slightly lower content latency. When the streaming client is controlled by the content provider, seeding is easily forced and topology could be controlled centrally.
You should see how people try to get HLS to pick a stream. With the default players it's not possible; the client does it.
The server can control the stream by advertising a customized manifest to individual clients, although it's a bit silly. HLS is designed to be extremely easy to load-distribute and throw CDN/cache proxies in front of, and it's a bit sad that content providers are this bad at load management. :/
Either way, the assumption here is that you would swap out the client doing HLS with a client designed to use a P2P protocol, including the seeding portion and network health management.
I remember some BitTorrent networks circa 2005 or so which tried to monitor you and punish you for not contributing, and this was a disaster for me since my upload is necessarily a small fraction of my download. What I found is that that kind of network seemed to perform poorly even when I had a good bidirectional connection. As I saw it, people who have the ability to upload more than they download are a resource that that kind of network can't exploit if everybody is forced to upload as much as they download.
The point is to ensure network health with a metric that is simple to understand and verify: that you have been productive. If you aren't seeding, someone else has to pick up the slack, and the network didn't benefit from you obtaining the blocks.
The community itself benefits by giving members a guarantee that stuff there is available with abundant bandwidth, instead of relying purely on the unpaid goodwill of others.
And as a note, video segments for live are usually set to no-cache, as are VOD segments. The CDN does the caching. The client should keep segments around if you're doing rewind, but that's a client thing.
1. Locally randomizing segment download order (see the sketch after this list)
2. Creating a larger buffer
3. Prioritizing parts coming from slower connections
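A sketch of how ideas 1 and 2 might combine in a segment scheduler (the window size, batch size and urgency cutoff are all invented for illustration):

```python
import random

def next_requests(playhead, have, window=30, batch=5):
    """Keep a large buffer window ahead of the playhead (idea 2) and
    randomize download order within it (idea 1), so peers at the same
    playhead end up holding different pieces and can trade -- while
    never starving segments that are about to be played."""
    needed = [s for s in range(playhead, playhead + window) if s not in have]
    random.shuffle(needed)
    urgent = [s for s in needed if s < playhead + 3]   # about to play
    later = [s for s in needed if s >= playhead + 3]
    return (urgent + later)[:batch]

print(next_requests(playhead=100, have={100, 101, 105}))  # 102 always first
```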
If I'm streaming live, I need the frame immediately, and it doesn't help much to get later frames after the frame I'm missing.
BT, on the other hand, is fundamentally designed for "many-to-many" distribution, where peers share pieces of content with each other over time. This isn't just a question of tweaking the protocol; it's a fundamentally different problem. Unless you're willing to compromise on the immediacy of the stream, using BT for true live streaming isn't really a good fit.
But you can't live stream a conversation with someone if you have a 10s delay.
No one wants to hear the cheering start at the neighbor's house ten seconds before you get to see the team score a goal.
And the order is requested by the client, and there are clients that download in sequential order, like Deluge.
And, sure, some BT clients can stream the data, but what the default is makes a huge difference.
Would you want to watch the beginning of something that didn’t have an ending? How frustrating would that be?
Perhaps, but the time spent downloading it is also time spent uploading some of the file, so there's still some benefit. By having it in random order, you more evenly distribute the people with access to different parts of the file.
With streaming, if everyone downloads the same blocks at the same time, "bad actors" can dump all data they already watched to save disk space, harming potential peers that are watching slightly behind.
Unless you use public key cryptography, which is so expensive that nobody actually uses it for arbitrarily large inputs.
Same with streaming audio: each chunk IS a static file, so every phone call you've made in the last 30 years was a series of static files.
Obviously, at the end of the day it's a string of bytes, like everything is. The difference I'm trying to get at is in how the data is used and requested.
It's more a social difference than a technical one.
And there's no reason for your MPV/VLC/PotPlayer not to render that in sequential order,
even when you only have the first 2 pieces.
That one post is more on-topic for what the OP is asking than 90% of the comments here.
Again: news, movies, comedy, Trump's tariffs are streamed digitally to billions of people over DVB-T/C/S every day. If how the bits are ordered/chunked matters to you so much that this already-working system isn't good enough in that sense, that makes no sense.
Or explain a little more; a single sentence that explains the whole world is K-12 stuff. Or "42", for the book readers.
In contrast to a live stream where everyone is viewing the same part at the same time, and once that part passes nobody is likely to view the old part ever again.
This makes a big difference in terms of network design.
If I can send 2 copies of a piece to 2 people immediately as I get it, then if my download takes 20 ms and sending it takes another 20 ms, is it "well seeded" for those 3 people after 50 ms? Or after how much time is it "well seeded"?
That being said, I have a small correction. If you want to stream to two peers (that is, you have a network with 3 fully connected nodes, one being the originator of the stream) and the link latency for all links is 20 ms, then your lowest-latency path to each node is exactly 20 ms, as the originating node could simply send the content to each client.
The unfortunate realization is that 20ms is then also the optimal stream delay, assuming you care about the shortest possible latency for your live streaming service. The clients will therefore end up with the content exactly when they were supposed to show it, and therefore they would have no time to forward it to anyone else, lest that downstream would get the content AFTER they were supposed to show it.
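In numbers, with every link at 20 ms:

```python
direct = 20          # originator -> each client directly
relayed = 20 + 20    # originator -> client A -> client B

stream_delay = 20    # delay tuned to the optimal direct path
print(relayed > stream_delay)  # True: A has zero slack to forward,
                               # B would get the data 20 ms too late
```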
People seem to need 0 ms nano-ultra-low-latency streams for watching movies... they are insane. They want to be extraordinary high-speed traders, but with movies, not stocks. Insane.
Video-on-demand is perfectly implementable on top of BitTorrent. As you say, there are some latency pitfalls you'll have to avoid, but that's nothing you can't hack yourself out of.
Livestreaming is a different beast. As you say, the problem with livestreaming is that everyone needs the same content at the same time. If I spend 200ms downloading the next 500ms worth of content, then there's nobody to share it with, they all spent the 200ms doing the same. BitTorrent relies on the time shift that is allowed between me downloading the content and you requesting it. If you request it before I've got it, well I can't fulfil that request, only the guy I intend to get it from can.
If you wanted to implement something like that, you would probably pick a tree of seeders, where the protocol would pick a subset of trusted nodes to upload the content to before allowing them to seed it, and then have them do the same recursively.
That would obviously introduce a bunch of complexity and latency, and would very much not be BitTorrent anymore.
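A naive sketch of that tree construction (breadth-first with a fixed fanout; the peer names and fanout are made up, and a real protocol would also weight by bandwidth, trust and churn):

```python
def build_tree(origin, peers, fanout=3):
    """Assign each peer to the earliest node with a free upload slot,
    producing a fanout-ary distribution tree rooted at the origin."""
    children = {origin: []}
    frontier = [origin]
    for peer in peers:
        parent = frontier[0]
        children.setdefault(peer, [])
        children[parent].append(peer)
        if len(children[parent]) == fanout:
            frontier.pop(0)   # parent's slots are full
        frontier.append(peer)
    return children

tree = build_tree("origin", [f"peer{i}" for i in range(12)])
print(tree["origin"])  # ['peer0', 'peer1', 'peer2']
print(tree["peer0"])   # ['peer3', 'peer4', 'peer5']
```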
For example, say you have a cluster of people on the call in the US and another cluster in the UK. Ping times are 100 ms or more across the ocean, and there will be some random packets lost, but ping times within the UK are around 15 ms max. By working co-operatively and sharing among themselves, the clients in one cluster can fill in missing packets from a different cluster far quicker than requesting them from the originating host.
In general, the ability to request missing packets from a more local source should be able to improve overall video call quality. It still might be "too late", because for minimal latency, you might choose to use packets as soon as they arrive and maybe even treat out-of-order packets as missing, and just display a blockier video instead, but if the clients can tolerate a little more latency (maybe a tunable setting, like 50ms more than the best case) then it should in theory work better than current systems.
I've been mulling over some of these ideas myself in the past, but it's never been high enough on my TODO list to try anything out.
That's only true if you assume the nodes operate sequentially, which is not given. If the nodes operate independently from one another (which they would, being non-cooperating) they'd all get a response in ~100ms (computation and signaling time is negligible here), which is faster than they could get it cooperatively, even if we assume perfect cooperation (100ms for the first local node + 15ms from there). It's parallelism. Doing less work might seem theoretically nice, but if you have the capacity to do the same work twice simultaneously you avoid the synchronization.
Basically, it falls somewhere in my loose "tree based system" sketch. In this case the "trusted" nodes would be picked based on ping time clustering, but the basic sketch that you pick a subset of nodes to be your local nodes and then let that structure recursively play out is the same.
The problem you run into is latency. There's no good way to pick a global latency figure for the whole network, since it varies by how deep into the tree you are. As the tree grows deeper, you end up having to retune the delay. The only other option is to grow in width, at which point you've just created another linear growth problem, albeit with a lower slope.
E.g. if you arrange the network into a tree like that, you need to make sure all nodes are matched appropriately in terms of bandwidth, latency, geography, and number of connected nodes. Now you have to somehow ensure the network topology stays good in the face of churn and bad peers. Suddenly everything is complicated and not looking very P2P.
Maybe different protocols could manage that, but I think there's a reason P2P protocols never developed much beyond BitTorrent.
Just a mental curiosity is all.
That's basically true for one client (Transmission), which specifically refuses to allow linear ordering. Most clients implement it.
To enable it, it's about a 3-SLOC change.
I hate clients that don't work for the user.
Transmission does allow you to set a particular file (let's say the first file in a series) as "high priority", though, so it's not like they don't allow any change to the behavior.
This P2P stack was meant to allow for mass scaling of lowish latency video streaming, even in parts of the World with limited peer bandwidth to original content source servers. The VC-1 format got into a legal quagmire, as most video streaming protocols do, and it speaks volumes that by the time I turned up in ~2012-ish, the entire stack was RTMP, RTSP, HDS and HLS with zero evidence of that P2P tech stack in production.
My main role was to get the ingest stack out of a DC and into cloud, while also dealing with a myriad of poor design decisions that led to issues (yes, that 2013 outage in the first paragraph of the wiki article was on my watch).
At no point did anybody suggest to me that what we really needed was to turn our attention back to P2P streaming. The company did build a version of Periscope (Twitter's first live streaming product), launched it weeks/months before they did, and was pivoting towards a social media platform, at which point I decided to go do other things.
The technical and legal problems are real, and covered elsewhere here. People want reliable delivery. Even Spotify, YouTube and others who have licensed content and could save a pile by moving to DRM-ified P2P don't go near it, and that should tell you something about the challenges.
I'd love more widespread adoption of P2P tech, but I'm not convinced we'll ever see it in AV any time soon, unfortunately.
[0] https://en.wikipedia.org/wiki/LiveStation
Thank you for bringing up the warm memories I thought I no longer had.
"Why's my internet slow? Oh, YouTube is uploading a bunch of stuff to other people"
"How did I hit my bandwidth cap for the month already? Oh, youtube is..."
Secondly, the p2p system will be advantageous for the videos that most people watch, i.e., popular videos. This implies that the "popular" video will have a large number of concurrent users who are transmitting a small part of video to just 3 other peers who are then transmitting the same part to 3 other peers.
This way, the bandwidth usage for uploading is reduced.
Key part of that tech was that it synchronized the playback between all peers. That was nice for stock market announcements and sport events for example.
https://web.archive.org/web/20131208173255/http://splitcast....
https://www.youtube.com/watch?v=R5UYu9jeQbY
https://www.crunchbase.com/organization/splitcast-technology
For 'hobbyists' there is a lot of complexity with setting up your own streaming infrastructure compared to just using YouTube or Twitch.
Then for media companies who want to own it, they can just buy their own infra and networking which is outrageously cheap. HE.net advertises 40gbit/sec of transit for $2200/month. I'm oversimplifying this somewhat, you do have issues with cheap transit and probably need backups especially for certain regions. But there isn't much of a middleground between hobbyists and big media cos.
For piracy (live sports streams), I've read about https://en.wikipedia.org/wiki/Ace_Stream being used for this exact purpose, FWIW. This was a while back, but I know it had a lot of traction at one point.
Minimum-latency broadcast forms a balanced n-ary tree. A tree is by definition not peer-to-peer. The number of branches per node is the upload speed divided by the bandwidth of the stream. This branching factor is extremely low for residential internet, with its asymmetric high download and low upload speeds.
Once you add malicious adversaries in the P2P network or poor network connectivity, each client will need to read multiple streams at once via erasure coding and switch over when a node loses its connection.
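The erasure-coding idea can be illustrated with the simplest possible code, a 2-of-3 XOR parity (just a sketch; a real system would use Reed-Solomon or fountain codes): each segment goes out as three chunks via three different peers, and any two of them reconstruct it.

```python
def encode(segment: bytes):
    """Split a segment into halves A and B plus a parity chunk P = A ^ B."""
    if len(segment) % 2:
        segment += b"\x00"   # pad to even length (toy framing)
    half = len(segment) // 2
    a, b = segment[:half], segment[half:]
    p = bytes(x ^ y for x, y in zip(a, b))
    return a, b, p

def decode(a=None, b=None, p=None):
    """Reconstruct from any 2 of the 3 chunks."""
    if a is not None and b is not None:
        return a + b
    if a is not None and p is not None:   # lost B: B = A ^ P
        return a + bytes(x ^ y for x, y in zip(a, p))
    if b is not None and p is not None:   # lost A: A = B ^ P
        return bytes(x ^ y for x, y in zip(b, p)) + b
    raise ValueError("need any 2 of 3 chunks")

a, b, p = encode(b"segment bytes!")
assert decode(a=a, p=p) == b"segment bytes!"   # survives losing one peer
```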
In my opinion, NAT and the extensive tracking that has led users to distrust sharing their IP addresses are the reasons why it hasn't caught on.
Imagine YouTube using P2P technology; it would save a lot of the money spent on caching servers.
> Imagine YouTube using P2P technology; it would save a lot of the money spent on caching servers.
I think it's money well spent.
There is a lag between the source and the audience, maybe it's been improved in the last 4 years though, not sure.
I couldn't find much docs on how it works, just https://docs.joinpeertube.org/contribute/architecture#live
Sounds like they break the stream into very small segments and publish each of those with BitTorrent (?). They seem to claim about a 30-second delay, and scale in the hundreds but not thousands. Certainly impressive if true; I wouldn't have thought such an approach would scale so well. Of course it's still a far cry from Twitch, but nonetheless impressive.
I remember it as one of the rare apps built in XUL, the same framework as Mozilla's apps (Firefox).
https://en.m.wikipedia.org/wiki/Joost
In general people aren't tolerant of lag and spinning circles and other such things when they're trying to watch streaming content. If you're fine with just watching it a little bit later, you might as well queue it up and let the whole thing download so it's ready when you are.
Popcorn Time got taken down pretty hard because they became too popular too fast.
A commercial solution could have a seed server optimized for streaming the initial segments of video files to kickstart the stream, and let basic torrents deal with the rest of the stream.
The main reasons I would think it would be useful are 1. streaming sites seem to lose a lot of money, and 2. sports streams are really bad, even paid ones. I have DAZN and two other sports streaming services, and they still lag and are only 720p.
I think you would probably need something more in the neighbourhood of 10 minutes to really make a difference. If you could make a stable P2P live streaming app with the number of peers all watching the same stream in the hundreds and only 30 seconds of latency, I'd consider that pretty amazing.
> Also do you think there's any way you can prioritize seeders in such a protocol? like some kind of algorithm that the more you share the more you're prioritized in getting the most up to date packets.
If we are talking about a livestream (and not "Netflix"-type streaming) then I don't think seeders are a thing. You can't seed a file that isn't finished being created yet.
If you mean more generally punishing free-riders, I think that is difficult in a live stream: data would generally be coming in from a different set of peers than the peers you are sending data out to, so it's difficult (maybe not impossible) to know who is misbehaving.
It's similar to Popcorn Time, which was killed by legal means, so I'd say they did take off.
Stremio smartly avoids being killed by making pirating an optional plugin you have to install from another site, so they get deniability.
It works well and saves my ass from needing thousands of subscriptions.
But the reality is that for 99% of people YouTube and Twitch work just fine.
Plus most residential ISPs have really poor upload speed, and very restrictive data caps.
The former don't want to use it as it degrades their control over the content, and the latter don't want to build a new system because systems built on torrents are good enough.
I then left and the company later got acquired by Level 3 so I don't know exactly how it evolved but it's likely that they abandoned the illegal streaming market for reputational reasons and stuck with big players.
It just struck me that there are probably plenty of large media companies that use all sorts of proprietary video streaming products for distribution that we've never heard of, simply because the tech isn't available to consumers.
Media companies are generally pretty secretive about their tech (Netflix being the exception to this rule), so there isn't much to be found about this. The piracy community (because, let's be real here) also won't be interested in a non-free (speech and beer) streaming solutions like these. So that's probably why there is just very little public information available.
But if you use paid digital TV products (Eurosport being a perfect example here) then you are probably already using all sorts of P2P streaming protocols you've never heard of.
Encryption (can work with sharing), signatures, fall back to CDN. Control is not an issue.
> torrents are good enough.
Torrents can't do the massive market of livestream, like sports or season finales or reality TV / news. This is the entire point of the question.
> The only entities
And everyone kicked off of YouTube, or who doesn't want to use big corporations on principle, like hacker cons or the open source community.
And of course if an encryption key gets leaked, you can just rotate it. Since it’s a stream, past content is not as important.
(That said, I don’t think it will help — any DRM can be cracked, and there’s plenty of online TV streaming sites even with the current centralized systems.)
Or a very similar point: I had a conversation with some big YouTuber who was confused about why he wasn't more popular with a certain demographic. The reason was that said demographic was watching on big TVs, and the content he was filming was his head directly in front of the camera. They don't like having a 3-foot head right in front of them... most young people watch things on mobile.
Our library is general purpose and can be used whenever you need direct connections, but on top of Iroh we also provide iroh-blobs, which provides BLAKE3 verified streaming over our QUIC connections.
Blobs currently is a library that provides low level primitives and point to point streaming (see e.g. https://www.iroh.computer/sendme as an example/demo )
We are currently working on extending blobs to also allow easy concurrent downloading from multiple providers. We will also provide pluggable content discovery mechanisms as well as a lightweight content tracker implementation.
There is an experimental tracker here: https://github.com/n0-computer/iroh-experiments/tree/main/co...
Due to the properties of the BLAKE3 tree hash you can start sharing content even before you have completely downloaded it, so blobs is very well suited to the use case described above.
We already did a few explorations regarding media streaming over iroh connections, see for example https://www.youtube.com/watch?v=K3qqyu1mmGQ .
The big advantage of iroh over bittorrent is that content can be shared efficiently from even behind routers that don't allow manual or automatic port mapping, such as many carrier grade NAT setups.
Another advantage that BLAKE3 has over the bittorrent protocol is that content is verified incrementally. If somebody sends you wrong data you will notice after at most ~16 KiB. Bittorrent has something similar in the form of piece hashes, but those are more coarse grained. Also, BLAKE3 is extremely fast due to a very SIMD friendly design.
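For contrast, classic BitTorrent-style piece verification looks roughly like this (simplified; v1 torrents carry one SHA-1 hash per piece in the metainfo, and 256 KiB is just a common piece-size choice). You have to buffer a whole piece before you can tell a peer fed you garbage, whereas a tree hash can reject after ~16 KiB:

```python
import hashlib

PIECE_SIZE = 256 * 1024   # a common (coarse) piece size

def piece_hashes(data: bytes):
    """One SHA-1 per piece: a bad byte is only detected once the
    entire piece has arrived and been hashed."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify_piece(index, piece, expected):
    return hashlib.sha1(piece).digest() == expected[index]

data = bytes(1024 * 1024)                   # 1 MiB -> 4 pieces
expected = piece_hashes(data)
tampered = b"\xff" + data[1:PIECE_SIZE]     # flip the first byte
print(verify_piece(0, tampered, expected))  # False, but only after 256 KiB
```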
We are big fans of bittorrent and actually use parts of bittorrent, the mainline DHT, for our node discovery.
Here is a talk from last year explaining how iroh works in detail: https://www.youtube.com/watch?v=uj-7Y_7p4Dg , also briefly covering the blobs protocol.
The real reason is that centralised architecture gives them control and the ability to extract rent.
What are you talking about?
YouTube has a lot more costs than bandwidth. And a lot of ads and Premium revenue goes to creators.
Surprisingly, the channels that are available work really well if you just use the MPEG-TS stream.
In a past life I added a few channels to a tvheadend instance on a VPS. It reliably crashed Kodi on some channels, and I've wondered whether it's just broken streams or something more interesting going on.
If you open the ports and watch popular channels, it easily saturates your bandwidth; there is no limit.
I've since stopped using it; it's the kind of thing that breaks rarely enough that it's not useless, but often enough to be annoying.
It's IPv4-only and seems to use its own tracker, or at least calls some URLs for initial peer discovery.
Building something similar as true open source would be great, but I guess the use case is mostly illegal streaming.
Be careful: it attempts to use UPnP to open ports on the router, and even just looking through the lists makes you upload fragments.
Still a fascinating tool. It gets close to what the OP is looking for, but I think it has scalability issues, and everything about it is kind of shady and opaque.
I was hopeful about bittorrent-live when that was announced, but they didn't open source that for some reason either.
Churn & reliability: peers come and go, making stable streaming tricky.
Latency: BitTorrent-style protocols aren't built for real-time delivery.
Incentives: without rewards, too many users just leech.
WebRTC: it hits limits fast and often relies on centralized relays.
Legal risks: media companies don't play nice with decentralized distribution.
Bram Cohen tried with BitTorrent Live, but it fizzled out. Would love to see someone revive this with modern tech — still feels like untapped potential.
They use bao hashing, which is something I discovered through them (IIRC), and it's really nice.
You could create such a protocol, though BitTorrent/IPFS is fine.
I once wanted to create a website which was just a static website.
And I used some IPFS gateway to push it with my browser and got a link to that static website, all anonymous.
Kind of great tbh.
There are other genuinely useful crypto projects (like Monero for privacy), though I don't like the idea of smart contracts.
I really want to tell you that most crypto is a scam. These guys first went into crypto, and now I am seeing so much crypto + AI.
As someone who is genuinely interested in crypto from a technology (decentralization) perspective, I see transactions as a byproduct, not the end result, and people wanting to earn a quick buck feels really weird to me.
Also, crypto isn't safe. I think that these days it mostly just correlates with tech stocks, though 99% of the time it's run by scams, so it's absolutely worse.
The technology is still fascinating. But just because the technology is fascinating doesn't mean it's valuable. Many people are overselling their stuff.
That being said, I have actually managed to use crypto to create permanent storage (something like IPFS, but forced to store things forever), so I think this can be used where anonymity/decentralization is required. But still, this could be done without including money in the process, and crypto is still not as decentralized as one might imagine.
Iroh contributor here. I don't know what you are referring to. Iroh is just a library to provide direct QUIC connections between devices, even if they are behind a NAT. We don't have any plans to do a blockchain or an ICO or anything like that.
I am not aware of any project called Iroh that is a scam, but if there is, please provide a link here. It's not us.
I know there have been some scammers trying to make a BLAKE3 coin or something, a year ago.
My only gripe with Iroh currently is that its browser WASM feels like too much for me / I don't want to learn Rust.
So I actually wanted to build something that required connectivity, and I used nostr, because nostr is great for websites and, not gonna lie, it's awesome as well (but nostr is also riddled with crypto bros :( )
I have nothing against crypto in principle, but I really don't want Iroh to be associated with crypto scams.
Iroh is just a library for p2p connections. You can use it for crypto, but I would say that the majority of our users are non-crypto(currency).
We will try to make the wasm version easier to use, but if nostr works well for you, go for it! Not the right place if you want to avoid crypto bros though :-)
1. Hybrid architecture (CDN + P2P):
- Use a CDN for backbone traffic, and let edge nodes distribute via P2P to take pressure off the central servers (e.g., LivePeer tries to combine blockchain and P2P).
- Platforms such as Youku have experimented with such solutions, but they have to weigh cost against effect.
2. Protocol optimization (see the sketch after this list):
- Sliced transmission: divide the stream into small pieces and improve efficiency through multi-path transmission.
- Dynamic priority: adjust the data-allocation strategy on the fly according to node bandwidth and latency.
- Buffering and preloading: let users tolerate higher latency in exchange for more stable delivery (the HLS/DASH idea).
3. Decentralized network exploration:
- Projects such as IPFS and BitTorrent Live have tried real-time streaming, but are limited by technical maturity and ecosystem support.
- Web3 projects (such as Theta Network) add token incentives to encourage nodes to contribute bandwidth, which may push development forward.
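The "dynamic priority" bullet could be as simple as scoring candidate sources per slice (a sketch; the peers, stats and weights are all made up):

```python
def score(peer):
    """Toy priority: prefer measured bandwidth, penalize latency.
    A real system would tune the weights and decay old measurements."""
    return peer["mbps"] - 0.05 * peer["rtt_ms"]

def assign_slices(slices, peers):
    """Hand slices to the best-scoring sources, round-robin so one
    fast peer isn't asked to serve everything at once."""
    ranked = sorted(peers, key=score, reverse=True)
    return {s: ranked[i % len(ranked)]["id"] for i, s in enumerate(slices)}

peers = [
    {"id": "cdn-edge", "mbps": 100, "rtt_ms": 40},
    {"id": "peer-a",   "mbps": 25,  "rtt_ms": 15},
    {"id": "peer-b",   "mbps": 5,   "rtt_ms": 120},
]
print(assign_slices(range(6), peers))
```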
For livestreams there's AceStream built on BitTorrent, but I think it's closed-source. They do have some SDK but I never looked into it. It's mostly used by IPTV pirates. I've used it a few times and it's hit-or-miss but when it works well I have been able to watch livestreams in HD/FullHD without cuts. Latency is always very bad though.
Then for video-on-demand there are some web-based ones like PeerTube (FOSS) and I think BitChute? Sadly WebTorrent is very limited.
Besides bandwidth problems (as you can't 100% rely on remote connections), any P2P solution would mean the same fragment will be shared many times between clients; something CDN networks have solved already (just serving content, instead of juggling with signalling)
Such a shame that it failed, nothing after it ever came close.
We had torrent client/streaming video players maybe 20 years ago already.
> Does anyone have knowledge on why it isn't a thing still though?
It is a thing, it seems you didn't do your research.
There are articles all over the interweb if you go and look, such as
https://www.makeuseof.com/best-torrent-streaming-apps/
Netflix famously offers ISPs an appliance.
And if you pay for the streaming, why would you donate your bandwidth to them? Would you get a discount?
Live events, e.g. sports?
"why would you donate your bandwidth to them?"
I don't know but people donate bandwidth for torrents, maybe it's 'free' for them?
I believe pirating is seen as an alternative to paying through the nose?
I pay through the nose for most live sports I watch.
It is a thing.
For live streaming there is WebRTC. It is also a thing.
All of it started with the webtorrent project though. One of the first demos was booting Ubuntu while streaming the incomplete live ISO image, quite impressive for the time.
This is great tech for media files. Currently better than any other. But it would make those media files very easy to redistribute, and it's hard to change that without losing the P2P goodies.
If Popcorn Time had a synchronized multi-resolution catalog, bandwidth-sensitive auto switch and some paid seed servers, it would be better than any other streaming service (technically speaking).
0: https://github.com/pldubouilh/live-torrent
Modern streaming protocols sometimes go to absurd lengths to avoid too many hops so you get the data as soon as possible... torrent has so many jumps and negotiations to get to the actual file. It's good for decentralization but decentralization and efficiency go against each other.
One possibility as you allude to is licensing. In a P2P streaming model “rights” holders want to collect royalties on content distribution. I’m not sure of a way you could make this feel legal short of abolishing copyright, but if you could build a way to fairly collect royalties, I wonder if you’d make inroads with enforcers. But overall that problem seems to have been solved with ads and subscription fees.
Another data point is that the behemoths decided to serve content digitally. Netflix and Spotify showed up. The reason the general population torrented music is because other than a CD changer, having a digital library was a requirement in order to listen to big playlists of songs on your… Zune. Or iPod. That problem doesn't exist anymore and so the demand dried up. There was also an audiophile scene but afaik with Apple Lossless the demand there has diminished too.
And finally, since people were solving the problem for real, we also entertained big-deal solutions to reduce the strain on the network. If you stream P2P, your packets take the slow lane. Netflix and other content providers built out hardware colocated with last-mile ISPs so that content distribution can happen even more efficiently than in a P2P model.
In short: streaming turned into a real "industry". Innovators and capitalists threw lots of time and money at the problem. Streaming platforms emerged, for better and for worse. And here we are today, on the cusp of repeating the past, because short-sighted business mongers have balkanized access with exclusive content libraries to chase user numbers.
If the goal is to cut costs — like vendors trying to avoid AWS/CDN bills — that’s a very different problem than building for censorship resistance or resilience.
Without a clear “why,” the tradeoffs (latency, peer churn, unpredictable bandwidth) are hard to justify. Centralized infra is boring but reliable — and maybe that's good enough for 99% of use cases.
The interesting question is: what’s the niche where the pain is big enough to make P2P worth it?
IPv6 multicast is probably the way forward for livestreams but I haven't really been keeping up on recent developments. In theory there could be dynamic registrations of multicast addresses that ISPs could opt-in to subscribe to and route for their customers.
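Joining a group is the easy part; it's a single socket option (a minimal sketch, and the ff15:: group address and port here are made up). The hard part is getting ISPs to actually route the traffic:

```python
import socket
import struct

GROUP = "ff15::1234"   # hypothetical transient multicast group
PORT = 5004

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ipv6_mreq = 16-byte group address + interface index (0 = default)
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {addr}")   # e.g. RTP packets of a stream
```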
It is insane to me that people feel the need to watch toxic channels like the LinusTechTips livestream, regurgitating weeks-old toxic marketing disinformation, and need 0 ms latency to do it... XD
Why does everyone need low latency for a one-way stream? It's an unnecessary hurdle just for the sake of having a hurdle, with no benefit to anything.
But I agree with you that if companies would just forget IPv4 exists, the internet would be simpler, faster and more usable, at a lower price for everyone.
Orchestrating P2P realtime video distribution is going to have a lot of problems, and spending VC money until someone acquires you is just a lot easier.
Here's a small list of challenges you'd face:
You'll need to have a pretty good distribution network to handle users who just can't manage to p2p connect.
Figuring out the right amount of a user's bandwidth you can use without people getting upset; there are a lot of internet accounts with bandwidth quotas, especially for mobile.
Trying to arrange so that users connect to users with the least transmission delays would be needed to reduce overall latency. Between cross oceanic connections having unavoidable latency, the potential of buffer bloat, and having a reasonable jitter buffer, pretty soon you have wild delays and potential rebuffering.
Bandwidth constraints / layer switching is going to be a big challenge: it's one thing when your server can just push the best stream the client can manage, but if you're streaming from a peer and the stream is too big, the peer probably doesn't have a smaller stream to switch to, and there's no good way to know where the bandwidth constraint is ... maybe you should switch to the same stream from someone else, or maybe you should switch to a smaller stream. Can you get even packets from one peer and odd packets from another ... should you?
Or to get back to your original question: https://docs.joinpeertube.org/use/create-upload-video
edit: You're not limited to these addresses; for one there are other instances, for another you can self-host your own, if you're into that.
Technically that is one of many possible solutions, 'ready to roll' right now.
addit: Regarding sustainability, and who is behind it, maybe https://framasoft.org/en/ would be of interest?
Linked from there https://framablog.org/2024/12/17/peertube-v7-offer-a-complet...
and
https://framablog.org/2025/04/10/2025-peertube-roadmap/
I just meant it never caught on, as in it's not super popular, but it looks like it's on the come-up. Would be nice to have a real YouTube competitor lol
If that's your thing, and you have some sort of presence online elsewhere, then you can link to PeerTube, no matter which instance, or self-hosted, without problem.
That's why I pointed you to it. If you need/want the most massive audience, because of platform familiarity/network effect, then probably not. At least not now. But someone has to start somehow :)
Even “modern” cities like NYC are limited to a MAXIMUM of 30Mbps upstream due to ISP monopolies and red tape.
It’s getting better, but Spectrum is still literally the only ISP available for many city residents, and their offerings are so lopsided that their highest-end package is a whopping 980/30.
That’s right. If you use the majority of that 980Mbps your IP overhead will gladly take that 30Mbps, leaving you with just about Zero headroom for anything else.
* Asymmetric network links, slow upload especially on cellular
* Traffic package limitations, and both DL and UL are counted
* Some ISP are very against p2p, sometimes it's a government policy (China banned "Residential CDNs")
* NAT
https://www.bittorrent.com/blog/2016/05/17/bittorrent-live-m...
The torrent PROTOCOL does not require you to download pieces in random order.
ONLY the COMPANY BitTorrent, Inc., which releases the APPLICATIONS NAMED "uTorrent" and "BitTorrent", doesn't want legal trouble from media/music companies, because STREAMING is a different legal category than downloading. There is no other reason for the torrent PROTOCOL not to deliver file pieces in sequential order.
If you need an instant, nanosecond-delay stream, those don't exist anywhere; even radio and TV stations broadcasting over the air are delayed so they all transmit synchronized. So 0 latency and synchronized can be mistaken for each other.
> if you need instant nanosecond delayed stream
I believe nobody was suggesting that.
"super seeding" is a different feature where a seed won't upload more pieces to a peer unless a previously uploaded piece has been distributed to another peer first.
It is ONLY important when you need to keep people (the SWARM) from finishing the torrent download, then closing the torrent client app and never sending data chunks to the next person.
BUT everyone is saying this is a stream and has to show the picture/video instantly.
So I do not understand why all those people in the other comments care about the state of the swarm: everybody says there are huge numbers of people watching, yet they still worry about the swarm...
(The swarm thing is important in the normal BitTorrent use case, irrelevant for streaming.)
I understand what they are saying; they do not understand that they are saying nonsense.
Tailscale (or any other P2P overlay network) could solve this problem by re-enabling the multicast support that most ISPs block. It's not a terrible idea.
Edit: a comment elsewhere linked https://www.librecast.net/librecast.html which seems to be doing exactly this.
> To enable multicast on the unicast Internet we start by building an encrypted overlay network using point-to-point links between participating nodes. Once established, our overlay network can run whatever protocols we require, unimpeded by routers and middleboxes and which is resistant to interception, interference and netblocks.
[0] https://www.librecast.net/librecast-strategy-2025.html
[1] https://spectra.video/w/9cBGzMceGAjVfw4eFV78D2
Off-topic but I'm impressed with how many potentially revolutionary projects get funding from NLNet.
But after working at an ISP for a while, I realised that getting ISPs to use cool protocols is just impossible, and everything must be built at higher levels.
https://github.com/johang/vlc-bittorrent/
0. https://ja.wikipedia.org/wiki/PeerCast
https://webtorrent.io/
You'd need to make a UI for it
Source: I worked on the Twitch video system for 6 years.
I work on low latency and live broadcast. The appropriate latency of any video stream is the entire duration of it. Nobody else seems to share this opinion though.
Plus instead of a million people all wanting to watch Spider-Man 2, those million people have infinite options of short videos or whatever to watch. The desire to watch A Specific Video isn’t what it used to be.
Times have changed and P2P as a common way of sharing stuff is dead to the average person.
You might want to look into the tradeoffs Discord decided to go with, https://discord.com/blog/how-discord-handles-two-and-half-mi....
Here's some boilerplate for rolling your own, https://blog.swmansion.com/building-a-globally-distributed-w....
In theory you could gain resilience from a P2P architecture but you're going to have to sacrifice some degree of live-ness, i.e. have rendering clients hold relatively large buffers, to handle jitters, network problems, hostile nodes and so on.
If you have [say] a 640 MB recording at 120 fps you would only need to successfully receive 2.5 MB at 30 fps to be able to watch the entire thing. With a slight delay in playback you could even hop from one sub set of channels to another.
It should work offline too. One could have the cutting edge crispy resolution on a large display or watch the same on a crappy old laptop. (and everything in between)
For fun I once converted a 3.5-hour lecture to 75 MB and was stunned by how watchable it still was.
One issue I can imagine is that each part would discover peers independently, whereas you'd expect most peers of the previous parts to also have the later files.
A second idea would be to use IPFS that way instead of torrents. It would probably have a much easier time reusing peer discovery between parts, and it would also solve the issue of when to stop seeding, as that is already built into the protocol.
I guess creating a distributed Twitch based on IPFS would be feasible, but I'm not sure how many people would want to install an IPFS node before they could use it. It's kind of a chicken-and-egg problem: you need a lot of people before the system starts to work really well, but to get interest it needs to perform really well so people would migrate from Twitch-like services.
Ofc you can use public gateways; afaik Cloudflare has a public IPFS endpoint that could serve as a fallback.