Hi,
I would like to (officially) request inclusion of the ZLine extension into PtokaX. Depending on your hubsoft architecture, this will allow bandwidth reductions of up to 50% per user.
For more information, please check out the DC++ Wiki (http://wiki.dcpp.net/index.php/Talk:NMDC_Client-Hub_Protocol#NMDC_ZLine_Extension). For a DC++ patch, check out the DC++ bugzilla (http://dcpp.net/bugzilla/show_bug.cgi?id=704).
A list of clients and hubsofts supporting this extension can be found on the DC++ Wiki (http://wiki.dcpp.net/index.php/Talk:NMDC_Client-Hub_Protocol#NMDC_ZLine_Extension).
Thank you.
This would affect the data-sending functions in the API (at least the instructions they trigger in C), but it looks nice.
Thank you.
I've been gathering support from various hubs and clients during the past week or so. They are added to the Wiki entry when reported.
I would really like to see this go in. To be honest, it might require a bit more work in the hubsoft to really take advantage of the feature, though. The bandwidth saving rises with the size of the compressed buffer; it does miracles for things like userlists.
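A quick sketch of why buffer size matters. This uses Python's `zlib` purely for illustration (a real hubsoft would call zlib's C API); the `$NickList`-style buffer is a made-up example, not real hub data:

```python
import zlib

# A fake $NickList-style buffer: thousands of similar nicks, very repetitive.
userlist = "$NickList " + "$$".join(f"user{i:04d}" for i in range(5000)) + "$$|"
raw = userlist.encode("ascii")

# Compressing a tiny slice barely helps: the zlib header/trailer overhead
# and the short match window eat most of the gain.
small = raw[:100]
print(len(small), "->", len(zlib.compress(small)))

# The full userlist compresses far better, because deflate can exploit
# the repetition across the whole buffer.
print(len(raw), "->", len(zlib.compress(raw)))
```

The same effect is why compressing a whole userlist or a batch of searches pays off, while compressing each short command on its own would not.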
I would love to see this feature. Even my 100Mbit connection is stressed when I connect to 10000+ hubs. :D
Quote: Originally posted by bastya_elvtars
I would love to see this feature. Even my 100Mbit connection is stressed when I connect to 10000+ hubs. :D
While you might see it as a way of saving bandwidth for your leeching 8o (hehe, j/k), I see it as a way of helping larger hubs. ;)
Just to let you know the status of this: Arnetheduck rejected ZLine in its current form and suggested an alternative way of doing it. Jove has constructed another patch that works this way, and it has been accepted into the DC++ CVS/SVN. :)
http://dcpp.net/bugzilla/show_bug.cgi?id=834 for the patch. For any questions about how it works, just ask me (I've got a vague idea), or ask Jove (he knows much more, lol). :)
OK, so we can expect this in next DC++ release, and the support in PtokaX is being worked on, AFAIK. ;)
Quote from: bastya_elvtars on 27 February, 2006, 17:37:20
OK, so we can expect this in next DC++ release, and the support in PtokaX is being worked on, AFAIK. ;)
Yes, unfortunately ZPipe works differently from how ZLine works, so clients/hubs which have already implemented ZLine will have to change it. :P
Yeah, arnetheduck is very namby-pamby. ::)
If he has hiccups, it's because the latest DC++ crashed my PC 3 times in 8 hours, and now he fiddles with such unimportant stuff. Thumbs up? Down? What?
From the patch it looks like ZPipe means compressing all data :o I am not sure this is possible without high CPU usage :(
From what I understood from questioning Jove on this: the hub sends $ZOn| or something, and that opens up a compressed stream. Once the data has finished being sent down it, the stream is closed and the client reverts back to normal mode. If more compressed data is to be sent, the hub first sends $ZOn| again to open up another compressed stream.
Does that make sense? (I have no idea about sockets/streams and stuff.)
ZPipe does indeed compress the data. It does not necessarily use a lot of CPU; it all depends on how your hub is structured. Aquila (http://aquila.berlios.de) creates generic buffers sent regularly to all users (with searches, for example); those buffers are compressed once and then sent to all users. The CPU usage of the compression is negligible. Compressing everything for each user individually is indeed prohibitive.
This is why ZPipe is structured as it is: you turn it on with $ZOn|, and as soon as the compressed stream ends, you fall back to normal mode.
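A minimal sketch of that framing, based on my reading of the thread rather than any spec (the exact $ZOn| syntax and stream handling are assumptions; real implementations would use zlib's C API on raw sockets):

```python
import zlib

ZON = b"$ZOn|"

def zpipe_frame(commands: bytes) -> bytes:
    """Hub side (sketch): announce $ZOn|, then send one finished zlib stream
    containing a burst of ordinary protocol commands."""
    return ZON + zlib.compress(commands)

def zpipe_read(data: bytes) -> bytes:
    """Client side (sketch): after seeing $ZOn|, inflate until the zlib
    stream ends, then fall back to plain-text mode for whatever follows."""
    assert data.startswith(ZON)
    d = zlib.decompressobj()
    plain = d.decompress(data[len(ZON):])
    # d.unused_data holds any plain-mode bytes that arrived after the
    # compressed stream ended -- this is the "fall back to normal" part.
    return plain + d.unused_data

burst = b"$Search Hub:1.2.3.4 F?F?0?1?test|$Hello someone|"
print(zpipe_read(zpipe_frame(burst)))  # round-trips back to the commands
```

The key point is that the end of the zlib stream itself marks where compressed mode stops, so no explicit "$ZOff" is needed.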
Ok, now I understand how it works ;) And it will be easy for me to change from ZLine to ZPipe 8)
Quote from: PPK on 28 February, 2006, 03:18:32
Ok, now I understand how it works ;) And it will be easy for me to change from ZLine to ZPipe 8)
Nice. Just a little side note: DC++ sends ZPipe0 in its $Supports, which I think is the only change in that patch. :)
I think no matter what, this will be a CPU hog on hubs with a large number of users: 5000 users send a lot of searches, which means PX will have to compress data often. However, this would indeed be a good feature for a small hub on a small connection. Personally I think that making a new protocol would be much better if you want to save bandwidth and CPU: instead of a string-based protocol it should be a binary protocol. That could offer up to 40-50% bandwidth savings and even more on the CPU.
Example (paste into Notepad or a program that shows all chars at the same size):

The current NMDC proto:

         1         2         3         4         5
12345678901234567890123456789012345678901234567890
--------------------------------------------------
$ConnectToMe nickname 127.0.0.1:1024|        <- bandwidth used: 37 bytes

A new binary proto:

         1         2         3         4         5
12345678901234567890123456789012345678901234567890
--------------------------------------------------
1nickname5IPPORT0                            <- bandwidth used: 17 bytes

char 1 = $ConnectToMe (takes 1 byte instead of 12 bytes)
char 5 = delimiter
char 0 = end of message (like "|")
IP = unsigned long (takes 4 bytes instead of up to 15 bytes)
PORT = unsigned short (takes 2 bytes instead of up to 4 bytes)
You must also remember that the hub has to compare the client's data against strings like "$ConnectToMe"; that takes much, much more resources than just checking the value of a single byte.
Can this also be used for the webserver?
ZPipe in PtokaX is working... but it looks like DC++ 0.687 has a bug in its support and sometimes shows parts of un-zlibbed data in chat ::)
Yes. The problem is fixed and the patch sent.
Btw, the ZPipe0 is temporary support, since ZPipe is still in its test phase.
Ok, patch applied and now it works without problems :)