Yes, indeed, I had worked on a small project of mine and needed a really efficient and cheap way to serve terabytes of data to a lot of users, fast. I re-created (I haven't invented this, obviously) a way to split a video file into keyframe segments by recording the start byte offset of each keyframe, so the file could be "virtually" split for streaming. That way a user wouldn't 1) buffer the whole file, 2) need the whole file in order to share it with others (P2P in the browser), or 3) need to restart the stream and the sharing if the connection broke.

This could obviously have been done with HLS or DASH, but that meant remuxing the files and keeping lots of small segment files around. Instead I remuxed each file into a TS container, indexed them all, generated JSON manifests, and ran a network of reverse-proxy "CDN" servers across continents that pulled from a few central servers and cached the small virtual chunks, which were created by reading byte ranges from the file and serving them as a "file" with PHP.

In the end, the project collapsed due to non-technical issues and my losing interest in favour of more legitimate and useful things. I then resold the technology a few times to people who wanted something similar but wouldn't have the issues I had, then moved on and forgot about it all.
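To make the mechanics above concrete, here is a minimal sketch of the two pieces, in PHP since that's what served the chunks. Everything specific here is an assumption for illustration, not the original code: the manifest layout, the file paths, and the `seg` query parameter are made up; only the ffprobe flags and PHP functions are standard. The first part records keyframe byte offsets into a JSON manifest.

```php
<?php
// Hypothetical offline indexer: record the byte offset of every keyframe in a
// TS file and write a JSON manifest next to it. The manifest layout is an
// assumption for illustration; only the ffprobe flags are standard.

$path = $argv[1];

// key_frame marks keyframes, pkt_pos is the byte position of the frame's packet
$cmd = 'ffprobe -v error -select_streams v:0 -show_frames '
     . '-show_entries frame=key_frame,pkt_pos -of csv=p=0 ' . escapeshellarg($path);
$rows = array_filter(explode("\n", trim(shell_exec($cmd))));

$offsets = [];
foreach ($rows as $row) {
    [$isKey, $pos] = explode(',', $row);
    if ($isKey === '1' && $pos !== 'N/A') {
        $offsets[] = (int)$pos;
    }
}

// each segment runs from one keyframe to the next (the last one to EOF)
$size = filesize($path);
$segments = [];
foreach ($offsets as $i => $start) {
    $end = $offsets[$i + 1] ?? $size;
    $segments[] = ['offset' => $start, 'length' => $end - $start];
}

file_put_contents(
    preg_replace('/\.ts$/', '.json', $path),
    json_encode(['path' => $path, 'segments' => $segments])
);
```

The second part is a sketch of the chunk endpoint: it looks up one segment in the manifest, seeks to its offset, and streams that byte range as if it were a standalone segment file, so the edge proxies can cache it like any other static object.

```php
<?php
// Hypothetical chunk endpoint: serve bytes [offset, offset + length) of a big
// TS file as a standalone "file". Manifest path and ?seg= are illustrative.

$manifest = json_decode(file_get_contents('/data/manifests/video1234.json'), true);
$idx = (int)($_GET['seg'] ?? 0);

if (!isset($manifest['segments'][$idx])) {
    http_response_code(404);
    exit;
}

$seg = $manifest['segments'][$idx];

header('Content-Type: video/mp2t');
header('Content-Length: ' . $seg['length']);
header('Cache-Control: public, max-age=31536000'); // let the edge "CDN" nodes keep it

$fh = fopen($manifest['path'], 'rb');
fseek($fh, $seg['offset']);

// stream the range in small pieces instead of loading the whole chunk into memory
$remaining = $seg['length'];
while ($remaining > 0 && !feof($fh)) {
    $buf = fread($fh, min(65536, $remaining));
    echo $buf;
    $remaining -= strlen($buf);
}
fclose($fh);
```

Because every chunk starts on a keyframe and has a fixed, addressable identity, a player can join mid-stream and browser peers can exchange the same pieces, without ever materialising thousands of tiny segment files on disk.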