by Guest on 2018/07/13 08:47:49 AM
Now, here is a challenge for the dev folks:
The scenario: a group of people who share the same hard drive over a network are downloading the same file(s) to their common destination folder.
The problem: the file allocation, hashing, checking and scanning processes of the different Fopnu instances used by the different users collide with each other, producing various error messages, and each instance downloads the file(s) from scratch. This happens even when a user manually resumes a file that another instance/user has already started in the common folder (which is included in those users' libraries and visible to this group of users only).
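To make the collision concrete, here is a minimal sketch (in Python, purely for illustration; Fopnu is closed source, so this is not its actual allocation code, and the path and sizes are invented) of what happens when two uncoordinated instances each pre-allocate the same destination file:

import os

DEST = "shared_folder/movie.mkv"    # hypothetical common destination file
os.makedirs(os.path.dirname(DEST), exist_ok=True)

def allocate_and_write(instance_id, offset, chunk):
    # Each instance opens the file with "wb", which truncates it to zero length,
    # so the second instance wipes out whatever the first one already wrote.
    with open(DEST, "wb") as f:
        f.truncate(1024 * 1024)     # pre-allocate 1 MiB of zeros
        f.seek(offset)
        f.write(chunk)

allocate_and_write("instance A", 0, b"data from user A")
allocate_and_write("instance B", 512 * 1024, b"data from user B")

with open(DEST, "rb") as f:
    print(f.read(16))               # all zero bytes: instance A's data is gone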
Why tackle this?
* It would significantly reduce the time each file downloaded this way needs to finish, making it available for sharing much sooner.
* It would allow people (say, users of a certain interest group) to build shared topic libraries without having to download/upload the files for that library from/to each other (imagine a scenario of even just 10 users and 1,000 files).
Probability of a solution: likely, since Fopnu is a multi-source downloader anyway and already has internal mechanisms for handling data packets coming in from different sources. (As a non-coder, I'd guess the main problem is the file allocation itself and sharing that allocation info between the Fopnu instances.)
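Just to illustrate the idea of sharing allocation info between instances, here is a sketch of one possible approach, not Fopnu's real mechanism: the sidecar file name is invented, and advisory locks can be unreliable on some network filesystems.

import fcntl        # POSIX advisory locking; Windows would need msvcrt.locking instead
import json
import os

os.makedirs("shared_folder", exist_ok=True)
CLAIMS = "shared_folder/movie.mkv.claims.json"    # hypothetical sidecar file next to the download

def claim_next_piece(instance_id, total_pieces):
    """Atomically claim the first unclaimed piece, or return None if all are taken."""
    fd = os.open(CLAIMS, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)            # serialize access across the instances
        raw = os.read(fd, 1 << 20)
        claims = json.loads(raw) if raw else {}
        for piece in range(total_pieces):
            if str(piece) not in claims:
                claims[str(piece)] = instance_id  # record who will download this piece
                os.lseek(fd, 0, os.SEEK_SET)
                os.ftruncate(fd, 0)
                os.write(fd, json.dumps(claims).encode())
                return piece
        return None
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

# Each instance would loop: claim a piece, download only that byte range into the
# common file, verify its hash, and repeat, so the group downloads each piece only once.

print(claim_next_piece("user-A", 8))   # -> 0
print(claim_next_piece("user-B", 8))   # -> 1 (a different piece, no duplicate work)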