@dkastl went to a conference at a university campus and he had to sign a paper stating that "by accessing this network you will not use any p2p software such as WinMX" (we had to google what software that is).
Of course, DAT didn't run on the campus. To my knowledge UDP hole-punching is not yet implemented in DAT, and even if it were, it would practically be p2p file sharing, violating his contract.
Now: This means nobody at the university campus can use DAT, and we have been wondering what reasons the university could have to prevent this.
What came to mind were the following theories:
- UDP packets cannot be filtered as easily as TCP packets. If you have network admins that filter file traffic on a per-TCP-packet basis, then they will be pissed if you try to work around them.
- There is also no clear distinction in UDP packets between a response and a command. TCP packets can, to some extent, be analysed for their content and for whether they might contain something malicious.
- An open port on a computer means it can receive a request at any time, which could also possibly be used by hackers to remote-control a computer inside the network.
- Most probable, though, is that they want to prevent p2p processes from slowing down their network by consuming all the bandwidth.
Now, DAT is by no means different from other solutions like Dropbox or Google Drive, but with none of them am I asked to open a port or the like, because they use HTTP APIs and similar fallbacks.
There has been a request for a DAT-Gateway to serve the data and work around this particular problem.
However, I would like to propose a different approach: specify an HTTPS protocol to download AND share data via a DAT-2-HTTP bridge server.
I imagine the download process working analogously to the DAT protocol over TCP:
<gateway>/<dat-discovery-key>/login
will return a challenge to the client if another peer is in the network; otherwise it will return 404, as no peer is available.
Next the client uses the public-dat-key to "login" to the swarm. The server asks another peer to verify it; upon verification the server stores the public-dat-key for a few hours together with a session-key that is used for future transactions.
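To make the intended flow a bit more concrete, here is a minimal client-side sketch of that login exchange. The route is the one proposed above; the JSON shapes (a `challenge` field, a returned `sessionKey`) and the idea of answering the challenge with a signature are assumptions, since none of this is specified yet.

```typescript
// Hypothetical client-side login against the proposed bridge.
// The route follows the proposal; the response shapes and the
// signChallenge callback are assumptions for illustration only.
async function loginToGateway(
  gateway: string,
  discoveryKey: string,
  signChallenge: (challenge: string) => Promise<string>
): Promise<string | null> {
  // Ask the bridge for a challenge; 404 means no peer is currently available.
  const res = await fetch(`${gateway}/${discoveryKey}/login`);
  if (res.status === 404) return null;
  const { challenge } = await res.json();

  // Answer the challenge; the bridge asks another peer to verify the answer.
  const verified = await fetch(`${gateway}/${discoveryKey}/login`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ response: await signChallenge(challenge) }),
  });
  if (!verified.ok) return null;

  // On success the bridge keeps the public-dat-key for a few hours and
  // hands back a session-key used for all following requests.
  const { sessionKey } = await verified.json();
  return sessionKey;
}
```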
With the session-key it's possible to download the content:
<gateway>/<dat-discovery-key>/get/content
<gateway>/<dat-discovery-key>/get/feed
These will return the data, of course with support for ranges (but only at the block size of hypercore). The server will then download the requested range from the p2p network on demand and deliver it to the client (maybe even caching the data on the server).
This "get" server architecture should allow for fairly easy clustering.
And what about distributing a DAT? The server could of course provide the cached data, but the client could also call:
<gateway>/<dat-discovery-key>/wants
to get a list of ranges that might be wanted by any peer in the network, which the client could then use to push some ranges to:
<gateway>/<dat-discovery-key>/push
Again, the server wouldn't need to cache the data, just see which peers would like that section and transfer it to those peers.
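A corresponding upload sketch, again with invented JSON shapes and headers layered on top of the proposed wants/push routes:

```typescript
// Hypothetical upload flow: ask the bridge which ranges other peers want,
// then push one of them. The routes follow the proposal; the response shape,
// the headers and the readRange callback are assumptions.
interface WantedRange { start: number; end: number }

async function shareOneRange(
  gateway: string,
  discoveryKey: string,
  sessionKey: string,
  readRange: (range: WantedRange) => Promise<Uint8Array>
): Promise<void> {
  const auth = { "x-dat-session": sessionKey }; // assumed auth header

  // 1. Which ranges does any peer in the swarm currently want?
  const wantsRes = await fetch(`${gateway}/${discoveryKey}/wants`, { headers: auth });
  const wanted: WantedRange[] = await wantsRes.json();
  if (wanted.length === 0) return;

  // 2. Push the first wanted range; the bridge relays it directly to the
  //    interested peers instead of storing it itself.
  const range = wanted[0];
  await fetch(`${gateway}/${discoveryKey}/push`, {
    method: "POST",
    headers: {
      ...auth,
      "content-type": "application/octet-stream",
      "x-dat-range": `${range.start}-${range.end}`, // assumed way to label the pushed range
    },
    body: await readRange(range),
  });
}
```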
With this sort of bridge we could have people participating in the DAT network without really being on it?! It would solve our problem.
I mentioned in the title of this issue that it focuses on HTTPS. I initially thought HTTP/2 might be a good idea, but the network infrastructure for that probably isn't ready either.