Video is one of the largest components of internet traffic today. In 2015 video represented 70% of all internet traffic, and this volume is expected to triple by 2020, by which time it will represent 83% of all traffic (source: Cisco). Since essentially all commercially available models of video distribution are based on unicast (where each viewer has its own direct connection to the back end), this puts enormous pressure on the back-end infrastructure. The industry (both OTT providers such as Netflix and equipment vendors such as Ericsson) tries to remedy this by placing edge nodes (essentially servers with large disks that act as local caches) as close to the customers as possible, but this is a very expensive solution, especially in sparsely populated areas.
To remedy this, a solution has been developed that exploits the fact that many people watch the same things: material that is being downloaded and watched can be redistributed to others who want to see the same video. The solution is a hybrid of peer-to-peer and classic CDN distribution, where material watched by many users is distributed locally between those users, while rarely watched material is delivered directly from the CDN to the end user. It works for both VOD (Video on Demand) and live video streams, with minimal added delay. Under good conditions, as much as 98% of the data can be offloaded from the CDN and instead distributed between devices. The implementation is patented and designed to be self-managing; it does not rely on any servers for communication except for initialization at startup.
The algorithms used to do the actual data distribution are straightforward and have been used in real scenarios for many years. The difficult part is optimizing when to get data from peers, what to get from whom, and when it is better to get data from the CDN even though it might be available from peers. This becomes particularly complicated in a live environment, where everything changes rapidly and decisions must be made very quickly. Additionally, these problems become harder to solve because the system is intentionally built without a central server for information exchange and optimization; instead, all the clients together form a large, self-regulating cloud.
When developing and optimizing these algorithms, a major obstacle is testing whether proposed changes improve overall performance. Due to the nature of peer-to-peer networks, many nodes are required to test actual performance, and preferably these clients should also have different network conditions, varying distances (latency and throughput) between each other, and so on. Testing this in real conditions is very costly (more than $100 000 per test) and takes a long time, since real users and devices are required.
The goal of this master thesis is to develop a system that simulates the real-life testing described above virtually. The real algorithms for distribution and prioritization must be used (or perhaps even the real clients for the different platforms), but in the simulation the number of users, their network conditions, what they watch and when, etc. are configurable and simulated. It should also be possible to run tests faster than real time (preferably many times faster). The aim is to give the developers of the algorithms a system in which they can repeatedly test and track the effects of their changes, and use the test output to further improve the algorithms.
If the timeframe allows, it would also be very interesting to visualize the simulated nodes and their communication plotted on a map. A further improvement could be to collect data from real end users and plot that on a map as well. This would be both an attractive tool to show customers and a way to help confirm the validity of the simulation system.
The core product is written in C, with interfaces available to make the system cross-platform. Integrations are currently available for iOS, Android, Mac, Windows, and Linux. The simulation platform can either be written as an external system running real clients, or as an integrated module within the C system, which gives good access to all parts of the system. To be successful in this work, we think that good analytical skills, good knowledge of real-time algorithms, and a fair amount of experience in programming efficient, optimized systems are very helpful.
Entecon is a consulting company based in Stockholm, Sweden, founded in 2008. The founders and owners are all graduates of the Engineering Physics program at KTH. We have a very skilled and experienced team that helps our customers improve their business by taking responsibility for the entire product development process, from product management to architecture, development, support, and operations. Our office is in downtown Stockholm, and we expect the person (or persons) doing this work to be based there. Entecon assists customers from around the world, with a focus on the USA and Sweden. The current end customer for this project is Voddler Sweden AB.
Time: 20 weeks
Start date: We expect work to start in spring or early summer of 2017. We will review candidates as they apply.