Faster end game
Faster Endgame, also known as Dynamic Block Requests, is a technique used to speed up file completion.
Netfinity's Dynamic Block Requests
The endgame is made faster by not requesting many blocks when the downloading file is near completion.
The downloading procedure is as follows:
- Connection to a peer is made and it is determined what chunks the peer has.
- A data range on the local client is reserved for downloading. This is a small part of the file. That same data range will not be requested from other peers.
- That data range is then requested from the peer.
So, by requesting fewer blocks from slower peers, we can request more or bigger blocks from faster peers, which results in higher speed and faster file completion.
This feature only makes sense when only a small part of a downloading file or chunk remains.
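A minimal sketch of the reservation step described above, using hypothetical names (BlockReservation, ByteRange) and a flat byte-range model rather than eMule's actual per-part data structures:

 // Sketch of the reservation step: each peer gets its own byte range, and a
 // reserved range is never handed out to another peer. Names and the flat
 // byte-range model are illustrative, not eMule's actual data structures.
 #include <algorithm>
 #include <cstdint>
 #include <map>
 #include <optional>
 #include <vector>

 struct ByteRange { uint64_t begin; uint64_t end; };   // half-open [begin, end)

 class BlockReservation {
 public:
     explicit BlockReservation(uint64_t fileSize) : fileSize_(fileSize) {}

     // Reserve up to maxLen bytes for a peer, preferring ranges that were
     // given back by dropped sources.
     std::optional<ByteRange> reserve(uint64_t peerId, uint64_t maxLen) {
         ByteRange r{};
         if (!freed_.empty()) {
             ByteRange f = freed_.back();
             freed_.pop_back();
             r = ByteRange{f.begin, std::min(f.end, f.begin + maxLen)};
             if (r.end < f.end)
                 freed_.push_back(ByteRange{r.end, f.end});   // keep the remainder available
         } else if (nextFree_ < fileSize_) {
             r = ByteRange{nextFree_, std::min(fileSize_, nextFree_ + maxLen)};
             nextFree_ = r.end;
         } else {
             return std::nullopt;                             // nothing left to reserve
         }
         reserved_[peerId] = r;
         return r;
     }

     // Return a peer's range to the pool, e.g. when a slow source is dropped.
     void release(uint64_t peerId) {
         auto it = reserved_.find(peerId);
         if (it == reserved_.end()) return;
         freed_.push_back(it->second);
         reserved_.erase(it);
     }

 private:
     uint64_t fileSize_;
     uint64_t nextFree_ = 0;
     std::map<uint64_t, ByteRange> reserved_;   // peer -> currently reserved range
     std::vector<ByteRange> freed_;             // released ranges waiting for re-use
 };

The important property is that a range handed to one peer is never requested from another, and that a dropped source's range immediately becomes available to a faster peer.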
An earlier implementation included "dropping" of sources that are too slow in order to allow faster clients to take over. Dropping sources is not always a good idea since sources that are slow now might become fast sources after a while (e.g. if you're on a trickle slot).
Netfinity's Explanation
Phase 1: Block request sizes
- 1. Estimate the time to completion.
time_to_complete = bytes_left_to_download / datarate_all_sources
- 2. Estimate the number of bytes to request so that the source doesn't complete later than 10 seconds after the estimated file completion.
bytes_to_request = source_datarate * (time_to_complete + 10)
- 3. Split the request into three to fill the OP_PARTREQ packet, but round the numbers to the nearest 10 kB.
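A minimal sketch of the Phase 1 sizing, assuming rates in bytes per second; only the 10-second margin, the three-way split for one OP_PARTREQ packet and the 10 kB rounding come from the text above (note that bytes are obtained by multiplying the source's rate by the allowed time):

 // Sketch of the Phase 1 sizing, assuming rates in bytes per second.
 // Only the 10 s margin, the three-way split and the 10 kB rounding are
 // taken from the text above; everything else is illustrative.
 #include <array>
 #include <cstdint>

 constexpr double   kCompletionMargin = 10.0;        // source may finish up to 10 s late
 constexpr uint64_t kRounding         = 10 * 1024;   // round requests to the nearest 10 kB

 // Steps 1 and 2: bytes this source can deliver before the file is expected to complete.
 uint64_t BytesToRequest(uint64_t bytesLeftToDownload, double datarateAllSources, double sourceDatarate)
 {
     if (datarateAllSources <= 0.0)
         return 0;
     double timeToComplete = static_cast<double>(bytesLeftToDownload) / datarateAllSources; // seconds
     return static_cast<uint64_t>(sourceDatarate * (timeToComplete + kCompletionMargin));   // rate * time
 }

 // Step 3: split the request into the three ranges carried by one OP_PARTREQ,
 // rounding each to the nearest 10 kB.
 std::array<uint64_t, 3> SplitRequest(uint64_t bytesToRequest)
 {
     uint64_t third   = bytesToRequest / 3;
     uint64_t rounded = ((third + kRounding / 2) / kRounding) * kRounding;
     return {rounded, rounded, rounded};
 }

For example, with 900 kB left and an overall rate of 300 kB/s, time_to_complete is 3 s, so a 50 kB/s source would be asked for roughly 50 kB/s * 13 s = 650 kB, split into three ranges of about 220 kB each.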
Phase 2: Put the source on hold if it can't request enough bytes without breaking the completion estimate
This means that if the calculation in phase 1 returned less than 10 kB, the source is put on hold for 20 seconds and then dropped if no new blocks become available. Note that this is only done once the source has sent all the pieces we requested. The hold exists in case another source fails or the part turns out to be corrupt; in that case we can continue the download with this source.
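A minimal sketch of the Phase 2 decision, using a hypothetical Source record; the 10 kB and 20-second thresholds come from the text above:

 // Sketch of the Phase 2 decision. Source is a hypothetical record; the
 // 10 kB and 20 s thresholds come from the text above.
 #include <cstdint>

 constexpr uint64_t kMinRequestBytes = 10 * 1024;
 constexpr double   kHoldSeconds     = 20.0;

 struct Source {
     bool   allRequestedPiecesReceived;   // source delivered everything we asked for
     double secondsOnHold;                // how long the source has been parked already
 };

 enum class SourceAction { Request, Continue, Hold, Drop };

 // Decide what to do with a source, given the Phase 1 result for it.
 SourceAction HandleSource(const Source& src, uint64_t bytesToRequest)
 {
     if (bytesToRequest >= kMinRequestBytes)
         return SourceAction::Request;    // big enough, request as usual
     if (!src.allRequestedPiecesReceived)
         return SourceAction::Continue;   // let the outstanding pieces finish first
     if (src.secondsOnHold < kHoldSeconds)
         return SourceAction::Hold;       // park it; new blocks may free up within 20 s
     return SourceAction::Drop;           // still nothing worth requesting: drop the source
 }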
Phase 3: Dropping slow sources
This phase is only entered if a fast source can't allocate any more blocks. Here we drop a source in the middle of a transaction, so data will be lost.
- 1. Find the slowest source that is slower than 1kB/s and would break the completion estimate.
The calculation is basically the same as in phase 1.
- 2. Cancel the download and free all allocated blocks.
This causes data loss (up to ~20 kB), as there are likely to be incompletely transferred packets. Great care has to be taken when doing this!
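A minimal sketch of the Phase 3 selection, again with a hypothetical Source record; the 1 kB/s limit and the completion-estimate check follow the text above, and the caller performs step 2 (cancelling and freeing blocks):

 // Sketch of the Phase 3 selection. Source is a hypothetical record; the
 // 1 kB/s limit and the completion-estimate check follow the text above.
 #include <cstdint>
 #include <vector>

 constexpr double kSlowRateLimit = 1024.0;            // 1 kB/s, in bytes per second

 struct Source {
     double   datarate;        // current download rate, bytes per second
     uint64_t reservedBytes;   // bytes still reserved for this source
 };

 // Would this source finish its reserved range later than the whole file?
 bool BreaksCompletionEstimate(const Source& s, double timeToComplete, double margin = 10.0)
 {
     if (s.datarate <= 0.0)
         return true;
     return static_cast<double>(s.reservedBytes) / s.datarate > timeToComplete + margin;
 }

 // Step 1: find the slowest source below 1 kB/s that breaks the estimate.
 // Returns an index into sources, or -1 if none qualifies. Step 2 is up to the
 // caller: cancel that source's download and free its reserved blocks, accepting
 // the possible loss of partially transferred data.
 int FindSourceToDrop(const std::vector<Source>& sources, double timeToComplete)
 {
     int    victim  = -1;
     double slowest = kSlowRateLimit;
     for (size_t i = 0; i < sources.size(); ++i) {
         if (sources[i].datarate < slowest && BreaksCompletionEstimate(sources[i], timeToComplete)) {
             slowest = sources[i].datarate;
             victim  = static_cast<int>(i);
         }
     }
     return victim;
 }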
NOTE! My actual algorithm is much more complex, but these are the basics. Also, I always keep my requests within the 180 kB AICH block bounds in the hope of handling corruption better.
/netfinity
This text has been taken from http://forum.emule-project.net/index.php?showtopic=92937&view=findpost&p=662988
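The note above mentions keeping requests within 180 kB AICH block bounds. A minimal sketch of such clamping, assuming the 184,320-byte (180 kB) block size used by AICH hashing; the helper name is illustrative:

 // Keep a request inside one 180 kB AICH block (184,320 bytes), so that a
 // corrupt block only affects requests made within that block. Illustrative helper.
 #include <algorithm>
 #include <cstdint>

 constexpr uint64_t kAichBlockSize = 184320;           // 180 kB

 struct ByteRange { uint64_t begin; uint64_t end; };   // half-open [begin, end)

 ByteRange ClampToAichBlock(ByteRange r)
 {
     uint64_t blockEnd = (r.begin / kAichBlockSize + 1) * kAichBlockSize;
     r.end = std::min(r.end, blockEnd);                // stop at the block boundary
     return r;
 }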
Dazzle's Faster Endgame
Another implementation is "Dazzle's faster endgame", which simply drops the slowest source of a file if no more block requests can be created. This is problematic because block requesting can also fail when a peer has no blocks we need (No Needed Part Source); dropping a source in that situation is very bad and doesn't help at all.
How is this bad? The source could be used for A4AF. What else? --134.130.183.101 05:07, 22 August 2008 (CEST)
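A minimal sketch of the rule described above, using a hypothetical Source record; the final comment marks the weakness discussed here (No Needed Part Sources, and the A4AF alternative):

 // Sketch of Dazzle's rule: if no new block request could be created for a file,
 // drop its slowest source. Source is a hypothetical record for illustration.
 #include <vector>

 struct Source {
     double datarate;        // bytes per second
     bool   hasNeededParts;  // false = "No Needed Part Source" (NNPS)
 };

 // Returns the index of the source to drop, or -1 for none.
 int DazzleDropCandidate(const std::vector<Source>& sources, bool couldCreateBlockRequest)
 {
     if (couldCreateBlockRequest || sources.empty())
         return -1;
     int    victim  = -1;
     double slowest = 1e300;
     for (size_t i = 0; i < sources.size(); ++i) {
         if (sources[i].datarate < slowest) {
             slowest = sources[i].datarate;
             victim  = static_cast<int>(i);
         }
     }
     // Weakness noted above: the request may have failed only because the remaining
     // peers are NNPS; dropping the slowest source then gains nothing, and the source
     // might instead have been moved to another file via A4AF.
     return victim;
 }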
Morph Approach
TODO: I've seen that Morph includes its own version of DBR, though I have not had any time to check it in detail... if anyone knows more, please add it --WiZaRd 21:34, 15 Jun 2006 (CEST)
Official Approach
The official client partially adopted Netfinity's approach by introducing two new features, illustrated below:
- if a file is near completion and the download speed of a source is low, fewer block requests are created for that client.
- block requests are reduced in size to avoid "already requested ranges".
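A rough sketch of the two adjustments; the concrete thresholds used here are assumptions chosen for illustration, not the official client's actual values:

 // Sketch of the two official adjustments. The thresholds below are assumptions
 // chosen for illustration; the official client uses its own values.
 #include <algorithm>
 #include <cstdint>

 constexpr uint64_t kNearCompletionBytes = 3 * 184320;   // "near completion": < 3 blocks left (assumed)
 constexpr double   kSlowSourceRate      = 10 * 1024;    // "low" download rate, bytes/s (assumed)

 // Adjustment 1: create fewer block requests for a slow source near completion.
 int BlockRequestCount(uint64_t bytesLeft, double sourceDatarate)
 {
     if (bytesLeft < kNearCompletionBytes && sourceDatarate < kSlowSourceRate)
         return 1;               // a single block instead of the usual three
     return 3;
 }

 // Adjustment 2: shrink a request so it stops before an "already requested range".
 uint64_t ReducedRequestSize(uint64_t wantedBytes, uint64_t bytesUntilAlreadyRequestedRange)
 {
     return std::min(wantedBytes, bytesUntilAlreadyRequestedRange);
 }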
Conclusion
Netfinity's original implementation is by far superior to any other present implementation, as it combines better dynamic block requests than the version in the ESE mod with more intelligent slow-source dropping than Dazzle's version.