Peer-to-Peer File Sharing

Network Security

Derrick
Rountree
, in


Security for Microsoft Windows System Administrators, 2011

Peer-to-Peer File Sharing

Peer-to-Peer File Sharing
systems are no longer simply a new fad technology. They have become ingrained in our Internet culture. You have to remember that just because Samantha is hosting a file that she says is a video of the Olympics, that doesn’t mean that it really is the Olympics. It could be some sort of Trojan or malware. Nowadays, many botnets are built using Peer-to-Peer File Sharing systems.

Most corporate organizations do not use Peer-to-Peer File Sharing systems for business purposes. So the easiest way to protect against abuse is to take steps to prevent their usage within your organization. You can do this by blocking access to any external servers or services that are used to control the peer-to-peer software. You can also internally block any ports that are used by peer-to-peer systems to talk to each other.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978159749594300003X

Peer-to-Peer Networks and File Sharing

Larry E.
Daniel
,
Lars E.
Daniel
, in


Digital Forensics for Legal Professionals, 2012

36.1

What is peer-to-peer file sharing?

The basic premise of peer-to-peer file-sharing networks is to allow people who want to share files on their computer to freely connect with other persons of like mind without having to know anything about how the network operates or anything about other computers on the network.

Every computer in a file-sharing network can be both a client and a server, and the methods for connecting them together into one huge network are all handled by the file-sharing software.
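This dual role can be sketched in a few lines of Python. The Peer class below is purely illustrative (the class and method names are invented, not from any file-sharing product), but it shows a single object answering requests as a server while issuing them as a client:

```python
class Peer:
    """A node in a toy file-sharing network: server and client at once."""

    def __init__(self, name):
        self.name = name
        self.shared = {}      # filename -> bytes this peer serves to others
        self.neighbors = []   # peers this node is directly connected to

    def share(self, filename, data):
        """Publish a local file so other peers can request it."""
        self.shared[filename] = data

    def serve(self, filename):
        """Server role: answer another peer's request for a file."""
        return self.shared.get(filename)

    def fetch(self, filename):
        """Client role: ask each neighbor's server side for a file."""
        for peer in self.neighbors:
            data = peer.serve(filename)
            if data is not None:
                return data
        return None
```

A real network would discover neighbors and transfer data over sockets; here the “connection” is just an object reference, which is enough to show that every node runs both halves of the protocol.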
Figure 36.2
shows an example of what a file-sharing network looks like.


Figure 36.2.
An example of a peer-to-peer network



Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597496438000365

Peer-to-Peer Integration

AnHai
Doan
, …
Zachary
Ives
, in


Principles of Data Integration, 2012

Bibliographic Notes

The emergence of peer-to-peer file sharing systems inspired the data management research community to consider P2P architectures for data sharing. Some of the systems that were built in this spirit were


[74, 283, 309, 346, 433, 459, 544, 585]

. Connections have also been made between PDMSs and architectures for the Semantic Web

[2, 10, 287].

The language for describing peer mappings in PDMSs and the query reformulation algorithm are taken from [288], describing the Piazza PDMS. Optimization algorithms, including methods for ignoring redundant paths in the reformulation and the effect of mapping composition, are described in

[288, 541].

The problem of composing schema mappings was initially introduced by Madhavan and Halevy [408]. In that paper, composition was defined to hold with respect to a class of queries, and several restricted cases where composition can be computed were identified. They also showed that compositions of GLAV mappings may require an infinite number of formulas. The definition we presented is based on the work of Fagin et al. [218], and Theorems 17.4 and 17.5 are likewise taken from there. Interestingly, Fagin et al. also showed that composition of GLAV formulas can always be expressed as a finite number of second-order tuple-generating dependencies, where relation names are also variables. Nash et al. [452] present more complexity results regarding composition, and in [75] Bernstein et al. describe a practical algorithm for implementing composition. A variant on the notion of composition is to merge many partial specifications of mappings, as is done in the work on the MapMerge operator [22].

The looser architectures we described in Section 17.6 are from

[2, 346, 459]

. In particular, the similarity-based reformulation is described in more detail in [459]. Mapping tables were introduced in [346], where the Hyperion PDMS is described. That paper considers several other interesting details concerning mapping tables. First, the paper considers two semantics for mapping tables – the open-world semantics and the closed-world semantics. In the closed-world semantics, the presence of a pair (X,Y) in the mapping table implies that X (or Y) cannot be associated with any other value, while in the open-world assumption they can. The semantics also differ on the meaning of a value not appearing in the mapping table. Second, the paper considers the problem of composing mapping tables and determining whether a set of mapping tables is consistent and shows that in general, the problem is NP-complete in the size of the mapping tables. Another related idea is that of update coordination across parties with different levels of abstraction, studied in [368]. One means of propagating such updates is via triggers [341].
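The two semantics can be made concrete with a small sketch. The function below is a toy model (the name and representation are invented, not from [346]) that treats a mapping table as a set of pairs and checks whether a new association is permitted under each semantics:

```python
def allowed(table, x, y, closed_world=True):
    """Is associating x with y permitted by a mapping table (a set of
    (x, y) pairs)? Closed world: a value that appears in the table may
    only take the associations listed for it. Open world: listed pairs
    are possible associations, and additional ones are not ruled out."""
    if (x, y) in table:
        return True
    if closed_world and any(a == x or b == y for a, b in table):
        return False
    return True   # values absent from the table are unconstrained
```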

The Orchestra system

[268, 544]

takes PDMSs one step further and focuses on collaboration among multiple data owners. To facilitate collaboration, users need to be able to manage their data independently of others, propagate updates as necessary, track the provenance of the data, and create and apply trust conditions on which data they would like to import locally. We touch on some of these problems in more detail in Chapter 18.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012416044600017X

Unstructured Overlays

John F.
Buford
, …
Eng Keong
Lua
, in


P2P Networking and Applications, 2009

Freenet

Freenet was proposed by Ian Clarke43 in 1999 as a distributed peer-to-peer file-sharing mechanism featuring security, anonymity, and deniability. The Freenet design discussed here is described in a paper published in 2000.44 Both objects and peers have identifiers. Identifiers are created using the SHA-1 one-way hash function. Peer identifiers are called routing keys. Each peer has a fixed-size routing table that stores links to other peers. Each entry contains the routing key of the peer. Freenet uses key-based routing for inserting and retrieving objects in the network. Requests are forwarded to peers with the closest matching routing key. If a request along one hop fails, the peer will try the next closest routing key in its routing table. The routing algorithm (Figure 3.8A) is steepest-ascent hill climbing with backtracking until the request TTL is exceeded. Consequently, depending on the organization of the links and the availability of peers, it is possible that requests could fail. Freenet counteracts this by caching objects along the return path both on lookup and insert requests. An object is stored at a peer until space is no longer available and it is the least recently used (LRU) object at that peer.
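The routing procedure can be sketched as a depth-first search that always tries the closest key first. The Python below is a simplified model, not Freenet's actual code: integer keys stand in for SHA-1 hashes and all names are illustrative.

```python
def route(peers, start, key, ttl):
    """Steepest-ascent hill climbing with backtracking: at each peer, try
    links in order of closest routing key; back up when a branch fails.
    Returns the peer holding `key`, or None if the TTL runs out."""
    visited = set()

    def dfs(node, ttl):
        if ttl < 0 or node in visited:
            return None
        visited.add(node)
        if key in peers[node]["store"]:
            return node                        # object found at this peer
        # Try neighbors with the closest matching routing key first.
        for nbr in sorted(peers[node]["links"], key=lambda n: abs(n - key)):
            found = dfs(nbr, ttl - 1)
            if found is not None:
                return found                   # success propagates back
        return None                            # dead end: backtrack

    return dfs(start, ttl)
```

Because the search gives up once the TTL is exceeded, a poorly linked overlay can fail to find an object that is actually stored, which is exactly why Freenet also caches objects along the return path.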


Figure 3.8.
Freenet distributed key-based routing: (A) example routing path with backtracking on failure and



As shown in Figure 3.8B, performance grows logarithmically until the network reaches about 250,000 nodes.

Freenet is an open-source project. The evolution of the design of Freenet is described in [47].

FastFreenet45 is a proposed modification of the Freenet design to improve the request hit ratio by over six times compared to regular Freenet routing. In FastFreenet, each peer shares a fuzzy description of the files that it has with its neighbors. When a query is received, a node can tell which neighbors are likely to have the data, and forwards the query accordingly. The fuzzy description is an N-bit number in which each bit corresponds to a 1/N segment of the key space. A 1 bit means that one or more files in that part of the key space are stored at that peer. A 0 bit means that no files in that part of the key space are stored at that peer.
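The fuzzy description is easy to model. In the sketch below (function names invented for illustration), the key space is divided into N equal segments and each stored key sets the bit for its segment:

```python
def fuzzy_description(stored_keys, key_space, n_bits):
    """Build the N-bit summary: bit i is 1 iff at least one stored file's
    key falls in the i-th of N equal segments of the key space."""
    segment = key_space // n_bits
    bits = 0
    for key in stored_keys:
        bits |= 1 << (key // segment)
    return bits

def may_have(description, key, key_space, n_bits):
    """Consult a neighbor's summary before forwarding a query to it."""
    segment = key_space // n_bits
    return bool(description >> (key // segment) & 1)
```

A 0 answer is definitive (the neighbor stores nothing in that segment), while a 1 only says the segment is non-empty, so forwarding on a 1 can still miss.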

Freenet caches objects that have been returned in response to earlier queries, to increase the likelihood of a successful query response. When the cache is full, space is made for new object query results by removing the LRU entry in the cache. An alternate policy is to prefer objects that are clustered around keys of interest. Objects that are furthest from the clustering key are removed first from the cache when new query results are available. Inspired by the clustering property of the small-world model, Zhang, Goel, and Govindan46 show that this caching policy significantly improves Freenet’s query hit rate. Since the cache policy is purely a local decision, no change to the Freenet protocol is needed to implement it.

The key clustering mechanism works as follows. Each node randomly selects a seed key that it uses to form the key cluster in its cache. When the cache is full, the key furthest from the seed key is removed. This is called strict enhanced clustering. A variation of this is called enhanced clustering, in which some cache entries far from the seed key are randomly retained in the cache. Figure 3.9A compares the two key clustering cache schemes with LRU. As the number of objects in the overlay increases, the clustering policies provide a substantial improvement to the hit ratio compared to LRU. For successful requests, the average number of hops using LRU is somewhat better than enhanced clustering and significantly better than strict enhanced clustering (Figure 3.9B).
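The difference between the two policies comes down to the eviction rule. A minimal sketch follows; the parameter names and the keep probability are our own choices, not taken from the paper:

```python
import random

def evict(cache, seed_key, strict=True, keep_prob=0.3):
    """Choose which key to drop from a full cache. Strict enhanced
    clustering always drops the key furthest from the seed key; enhanced
    clustering sometimes keeps far keys, preserving a few long-range
    entries in the spirit of the small-world model."""
    by_distance = sorted(cache, key=lambda k: abs(k - seed_key), reverse=True)
    if strict:
        return by_distance[0]
    for key in by_distance:                  # furthest first
        if random.random() > keep_prob:      # usually evict, sometimes spare
            return key
    return by_distance[-1]
```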


Figure 3.9.
Impact of replacing Freenet’s LRU cache replacement policy with key-clustering cache policies: (A) hit ratio versus load and (B) average number of hops versus load.


Reprinted from 46, © 2004, with permission from Elsevier.


Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123742148000039

Signature-Based Detection with Snort and Suricata

Chris
Sanders
,
Jason
Smith
, in


Applied Network Security Monitoring, 2014

Distance and Within

As we saw before, rules can be written so that they contain multiple content matches. When working with a rule like this, it can be incredibly useful to be able to specify how the content matches are positioned relative to each other. One way to do this is the distance rule modifier, which is used to specify the distance from the end of the previous content match at which to begin searching for the next content match.

The following rule makes use of the distance modifier:

alert tcp $HOME_NET 1024: -> $EXTERNAL_NET 1024: (msg:"ET P2P Ares Server Connection"; flow:established,to_server; dsize:<70; content:"r|be|bloop|00|dV"; content:"Ares|00 0a|"; distance:16; reference:url,aresgalaxy.sourceforge.net; reference:url,doc.emergingthreats.net/bin/view/Main/2008591; classtype:policy-violation; sid:2008591; rev:3;)

The rule shown above is used to detect activity related to the Ares
peer-to-peer file sharing
network.

1.

content:”r|be|bloop|00|dV”;

Match content occurring at any point in the packet payload

2.

content:”Ares|00 0a|”; distance:16;

Match content beginning at least 16 bytes after the end of the previous content match.

The following packet payload will generate an alert from this rule:

0x0000: 72be 626c 6f6f 7000 6456 0000 0000 0000 r.bloop.dV......

0x0010: 0000 0000 0000 0000 0000 0000 0000 0000 ................

0x0020: 4172 6573 000a Ares..

However, this payload will not match the rule, because the second content match does not occur at least 16 bytes after the first match:

0x0000: 72be 626c 6f6f 7000 6456 0000 0000 0000 r.bloop.dV......

0x0010: 4172 6573 000a 0000 0000 0000 0000 0000 Ares............

From the Trenches

A common misconception is that Snort or Suricata will look for content matches in the order they are listed within the rule. For example, if the rule states “content:one; content:two;”, the IDS engine would look for those content matches in that order. However, this isn’t the case, and this rule would match on a packet whose payload contains “onetwo” or “twoone”. To ensure that there is an order to these matches, you can pair them with a distance modifier of 0. This tells the IDS engine that the second content match should come after the first, but the distance between the matches doesn’t matter. Therefore, we could amend the following content matches to be “content:one; content:two; distance:0;”. This would match on “onetwo” but not on “twoone”.
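The distance semantics described above can be modeled in a few lines. This is only a simplified illustration of the matching logic, not how Snort or Suricata are implemented; the function name and the list-of-pairs format are invented:

```python
def match_contents(payload, contents):
    """Check an ordered list of (content, distance) pairs. distance is
    None for an unanchored match; otherwise the next match must begin at
    least `distance` bytes after the end of the previous match."""
    cursor = 0
    for content, distance in contents:
        start = cursor if distance is None else cursor + distance
        pos = payload.find(content, start)
        if pos < 0:
            return False
        cursor = pos + len(content)
    return True
```

Against the Ares signature, the first payload above matches because “Ares|00 0a|” begins 22 bytes after the first match ends, while the second payload fails the 16-byte requirement; a distance of 0 enforces ordering alone.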

Another rule modifier that can be used to dictate how multiple content matches relate to each other is the within modifier. This modifier specifies the number of bytes from the end of the first content match within which the second content match must occur. The following rule combines both the distance and within modifiers with multiple content matches:

alert tcp $HOME_NET any -> $EXTERNAL_NET 3724 (msg:"ET GAMES World of Warcraft connection"; flow:established,to_server; content:"|00|"; depth:1; content:"|25 00|WoW|00|"; distance:1; within:7; reference:url,doc.emergingthreats.net/bin/view/Main/2002138; classtype:policy-violation; sid:2002138; rev:9;)

This rule is designed to detect connections to the online World of Warcraft game by detecting two content matches occurring in the correct order:

1.

content:”|00|”; depth:1;

Match content occurring within the first byte of the packet payload.

2.

content:”|25 00|WoW|00|”; distance:1; within:7;

Begin matching content one byte after the end of the previous content match, ending by the seventh byte.

Considering these criteria, the following packet payload would generate an alert from this rule:

0x0000: 0000 2500 576f 5700 0000 0000 0000 0000 ..%.WoW.........

0x0010: 0000 0000 0000 0000 0000 0000 0000 0000 ................

The following would not generate an alert, because the second content match falls outside of the values specified by the distance and within modifiers:

0x0000: 0000 0000 0000 0000 2500 576f 5700 0000 ........%.WoW...

0x0010: 0000 0000 0000 0000 0000 0000 0000 0000 ................
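The combined distance/within check for this rule can likewise be sketched; again, this is a simplified model with an invented function name, not engine code:

```python
def wow_rule(payload):
    """Model of the two WoW content checks: a 0x00 byte within the first
    byte of the payload (depth:1), then |25 00|WoW|00| beginning at least
    1 byte after it (distance:1) and ending within 7 bytes (within:7)."""
    if payload[:1] != b"\x00":                 # content:"|00|"; depth:1
        return False
    prev_end = 1
    content = b"\x25\x00WoW\x00"
    pos = payload.find(content, prev_end + 1)  # distance:1
    return pos >= 0 and pos + len(content) <= prev_end + 7   # within:7
```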

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012417208100009X

Introduction

John F.
Buford
, …
Eng Keong
Lua
, in


P2P Networking and Applications, 2009

The Rise of P2P File-Sharing Applications

About 10 years after the World Wide Web became available for use on the Internet, decentralized peer-to-peer file-sharing applications supplanted the server-based Napster application, which had popularized the concept of file sharing. Napster’s centralized directories were its Achilles’ heel because, as it was argued in court, Napster had the means, through its servers, to detect and prevent registration of copyrighted content in its service, but it failed to do so. Napster was subsequently found liable for copyright infringement, dealing a lethal blow to its business model.

As Napster was consumed in legal challenges, second-generation protocols such as Gnutella, FastTrack, and BitTorrent adopted a peer-to-peer architecture in which there is no central directory and all file searches and transfers are distributed among the respective peers. Other systems such as FreeNet also incorporated mechanisms for client anonymity, including routing requests indirectly through other clients and encrypting messages between peers. Meanwhile, the top labels in the music industry, which have had arguably the most serious revenue loss due to the emergence of file sharing, have continued to pursue legal challenges to these systems and their users.

Regardless of the result of these court cases, the social perception of the acceptability and benefits of content distribution through P2P applications has been irrevocably altered. In the music industry prior to P2P file sharing, audio CDs were the dominant distribution mechanism. Web portals for online music were limited in terms of the size of their catalogs, and downloads were expensive. Although P2P file sharing became widely equated with content piracy, it also showed that consumers were ready to replace the CD distribution model with an online experience if it could provide a large portfolio of titles and artists and if it included features such as search, previews, transfer to CD and personal music players, and individual track purchase. As portals such as iTunes emerged with these properties, a tremendous growth in the online music business resulted.

In a typical P2P file-sharing application, a user has digital media files he or she wants to share with others. These files are registered by the user using the local application according to properties such as title, artist, date, and format. Subsequently, other users anywhere on the Internet can search for these media files by providing a query in terms of some combination of the aforementioned attributes. As we discuss in detail in later chapters, the query is sent to other online peers in the network. A peer that has local media files matching the query will return information on how to retrieve the files. It may also forward the query to other peers. Users may receive multiple successful responses to their query and can then select the files they want to retrieve. The files are then downloaded from the remote peer to the local machine. Examples of file-sharing client user interfaces are shown in Figures 1.1 and 1.2.
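The register/query/forward cycle just described can be condensed into a toy simulation. Nothing here corresponds to a real protocol; peers are dictionary entries and the flood is a simple stack walk:

```python
def search(peers, start, query, ttl=3):
    """Flood a metadata query through the overlay. Each peer reports its
    own matching files and forwards the query to its neighbors until the
    TTL is exhausted."""
    results, seen, frontier = [], set(), [(start, ttl)]
    while frontier:
        name, hops = frontier.pop()
        if name in seen or hops < 0:
            continue
        seen.add(name)
        peer = peers[name]
        for f in peer["files"]:
            if all(f.get(k) == v for k, v in query.items()):
                results.append((name, f["title"]))   # who has it, what it is
        frontier += [(n, hops - 1) for n in peer["neighbors"]]
    return results
```

A querier would then contact the returned peers directly to download the files, which is the step where multiple responses let the user choose a source.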


Figure 1.1.
LimeWire client.




Figure 1.2.
eMule client search interface.



Despite their popularity, P2P file-sharing systems have been plagued by several problems for users. First, some of the providers of leading P2P applications earn revenue from third parties by embedding spyware and malware into the applications. Users then discover their computers infected with such software immediately after installing the P2P application. Second, a large amount of polluted or corrupted content has been published in file-sharing systems, and it is hard for a user to distinguish such content from the original digital content they seek. It is generally felt that pollution attacks on file-sharing systems are intended to discourage the distribution of copyrighted material. A user downloading a polluted music file might find, for instance, noise, gaps, and abbreviated content.

A third type of problem affecting the usability of P2P file-sharing applications is the free-rider problem. A free rider is a peer that uses the file-sharing application to access content from others but does not contribute content to the same degree to the community of peers. Various techniques for addressing the free-rider problem by offering incentives or monitoring use are discussed later in the book. A related issue is that of peer churn. A peer’s content can only be accessed by other peers if that peer is online. When a peer goes offline, it takes time for other peers to be alerted to the change in status. Meanwhile, content queries may go unanswered and time out.

The leading P2P file-sharing systems have not adopted mechanisms to protect licensed content or collect payment for transfers on behalf of copyright owners. Several ventures seek to legitimize P2P file sharing for licensed content by incorporating techniques for digital rights management (DRM) and superdistribution into P2P distribution architectures. In such systems, content is encrypted, and though it can be freely distributed, a user must separately purchase an encrypted license file to render the media. Through the use of digital signatures, such license files are not easily transferred to other users. See this book’s Website for links to current P2P file-sharing proposals for DRM-based approaches.

Other ventures such as QTrax, SpiralFrog, and TurnItUp are proposing an advertisement-based model for free music distribution. The user can freely download the music file, which in some models is protected with DRM, but must listen to or watch an ad during download or playback. In these schemes, the advertiser instead of the user is paying the content licensing costs. Questions remain about this model, such as whether it will undercut existing music download business models and whether the advertising revenue is sufficient to match the licensing revenue from existing music download sites.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123742148000015

The Ever Changing Technical Landscape

John G.
Iannarelli
,
Michael
O’Shaughnessy
, in


Information Governance and Security, 2015

Social Media

The first e-mail was sent in 1971. The first IRC (Internet relay chat) was used in 1988. Napster—a peer-to-peer file-sharing service—started in 1999. And 2003 saw the advent of many social networking and bookmarking sites. Today, Facebook, LinkedIn, Tumblr, Instagram, and other shared sites are everywhere and are a big part of many people’s lives. Businesses are built on and for social networking. It has fundamentally changed the way we communicate and how we access information.

This change has arguably been for the better, but it does come with risk. The use of such sites is so prevalent that most users just ignore or are unaware of the risks that are inherent in a system that allows people to share personal information with others so easily. Businesses that fail to utilize social networking as part of their marketing and PR are at a disadvantage, and those that fail to manage social networking are at a decided risk for being manipulated and taken advantage of. Employees and management need to be aware of the risks and trained to identify and address those risks.

So how do organizations get taken advantage of through social networking? Oftentimes, it happens through the release or publicizing of personal or proprietary information through a person’s innocently intended use of social media. Posting job changes on Facebook or LinkedIn are well-known ways that information that should stay protected gets released. Most people are aware of this danger, but it does not take much to get caught up in the moment and type the wrong message. Once it gets published, it is out there for the world to see. Even if it gets deleted, it could have been copied and saved while it was still posted.

Information governance is responsible for making organizations at all levels better understand the issues. By establishing clear rules regarding the use of personal sites at work and clearly educating employees about why these rules are in place, risks can be mitigated. The risks cannot be removed entirely, however. They can only be minimized. People still make mistakes, and those with nefarious intent are becoming more and more savvy. Education, awareness, and constant messaging are key to this facet of information governance.

Additionally, an information governance program will clearly define levels of access. For example, a receptionist should not automatically have access to the personal data of clients and employees, because such access is not necessary to the job responsibilities of a receptionist. Keeping everyone “in their lane” is a very simple and straightforward way to minimize risks and limit exposure. Allowing everyone to access everything is dangerous and irresponsible.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128002476000030

Social Media

Ric
Messier
, in


Collaboration with Cloud Computing, 2014

Friendster

Friendster was a very early social networking site, developed in 2002 by a Canadian programmer. The name came from putting the word friend together with Napster, which was a peer-to-peer file-sharing network popular at the time. Friendster was a way of developing online circles of friends, providing a way for people to connect with one another and also extending the circle of people they know. In the first few months, Friendster had several million users, proving once again the interest people have in reaching out and connecting to one another, even through technology. Friendster’s popularity inspired the launch of Dogster, which was a similar idea for dogs to have circles of their own friends.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012417040700006X

Organization and network assessments

Leighton
Johnson
, in


Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020

Network Scanning (sniffing)

“Network sniffing is a passive technique that monitors network communication, decodes protocols, and examines headers and payloads to flag information of interest. In addition to being used as a review technique, network sniffing can also be used as a target identification and analysis technique. Reasons for using network sniffing include the following:

Capturing and replaying network traffic

Performing passive network discovery (e.g., identifying active devices on the network)

Identifying operating systems, applications, services, and protocols, including unsecured (e.g., telnet) and unauthorized (e.g., peer-to-peer file sharing) protocols

Identifying unauthorized and inappropriate activities, such as the unencrypted transmission of sensitive information

Collecting information, such as unencrypted usernames and passwords.

Network sniffing has little impact on systems and networks, with the most noticeable impact being on bandwidth or computing power utilization. The sniffer—the tool used to conduct network sniffing—requires a means to connect to the network, such as a hub, tap, or switch with port spanning. Port spanning is the process of copying the traffic transmitted on all other ports to the port where the sniffer is installed. Organizations can deploy network sniffers in a number of locations within an environment. These commonly include the following:

At the perimeter, to assess traffic entering and exiting the network

Behind firewalls, to assess that rulesets are accurately filtering traffic

Behind IDSs/IPSs, to determine if signatures are triggering and being responded to appropriately

In front of a critical system or application to assess activity

On a specific network segment, to validate encrypted protocols.

One limitation to network sniffing is the use of encryption. Many attackers take advantage of encryption to hide their activities—while assessors can see that communication is taking place, they are unable to view the contents. Another limitation is that a network sniffer is only able to sniff the traffic of the local segment where it is installed. This requires the assessor to move it from segment to segment, install multiple sniffers throughout the network, and/or use port spanning. Assessors may also find it challenging to locate an open physical network port for scanning on each segment. In addition, network sniffing is a fairly labor-intensive activity that requires a high degree of human involvement to interpret network traffic.”
10
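To make the “decodes protocols and examines headers” step concrete, the sketch below parses the fixed 20-byte IPv4 header (per RFC 791) from a captured frame using only the Python standard library; the function name and the set of returned fields are our own choices:

```python
import struct

def decode_ipv4_header(packet):
    """Unpack the fixed part of an IPv4 header the way a sniffer decodes
    captured traffic; returns a few fields of interest."""
    (version_ihl, _tos, _total_len, _ident, _flags_frag,
     ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s",
                                                      packet[:20])
    return {
        "version": version_ihl >> 4,
        "ihl": version_ihl & 0xF,              # header length in 32-bit words
        "ttl": ttl,
        "protocol": proto,                     # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
```

A real sniffer would obtain these bytes from a capture interface or pcap file; here the input is simply a byte string.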

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128184271000100

ISA 2004 Stateful Inspection and Awarding Layer Filtering


Dr.
Thomas W.
Shinder
,
Debra Littlejohn
Shinder
, in


Dr. Tom Shinder’s Configuring ISA Server 2004, 2005

Investigating HTTP Headers for Potentially Dangerous Applications

One of your primary tasks as an ISA firewall administrator is to investigate characteristics of network traffic with the goal of blocking new and ever more dangerous network applications. These dangerous applications might be peer-to-peer applications, instant messaging applications, or other applications that hide by wrapping themselves in an HTTP header. Many vendors now wrap their applications in an HTTP header in an attempt to subvert your Firewall policy. Your goal as an ISA firewall administrator is to subvert the vendors’ attempt to subvert your Network Usage policy.

As you can imagine, the vendors of these applications aren’t very cooperative when it comes to getting information on how to prevent their applications from violating your firewall security. You’ll frequently have to figure out this information for yourself, especially for new and obscure applications.

Your main tool in fighting the war against network scumware is a protocol analyzer. Two of the most popular protocol analyzers are Microsoft Network Monitor and the freeware tool Ethereal. Both are excellent, the only major downside of Ethereal being that you need to install a network driver to make it work correctly. Since the WinPcap driver required by Ethereal hasn’t been regression tested against the ISA firewall software, it’s hard to know whether it may cause problems with firewall stability or performance. For this reason, we’ll use the built-in version of Network Monitor included with Windows Server 2003 in the following examples.

Let’s look at a couple of examples of how you can determine how to block some dangerous applications. One such application is eDonkey, a peer-to-peer file-sharing application. The first step is to start Network Monitor and run a network monitor trace while running the eDonkey application on a client that accesses the Internet through the ISA firewall. The best way to start is by configuring Network Monitor to listen on the Internal interface of the ISA firewall, or whatever interface eDonkey or other offending applications use to access the Internet through the ISA firewall.

Stop the trace after running the offending application for a while. Since we’re only interested in Web connections moving through TCP port 80, we can filter out all other communications in the trace. We can do this with Display filters.

Click the Display menu and then click the Filter command. In the Display Filter dialog box, double-click the Protocol == Any entry. (See Figure 10.30.)


Figure 10.30.
The Display Filter Dialog Box



In the Expression dialog box, click the Protocol tab and then click the Disable All button. In the list of Disabled Protocols, click the HTTP protocol, click the Enable button, and then click OK. (See Figure 10.31.)


Figure 10.31.
The Expression Dialog Box



Click OK in the Display Filter dialog box. The top pane of the Network Monitor console now only displays HTTP connections. A good place to start is by looking at the GET requests, which appear as GET Request from Client in the Description column. Double-click on the GET requests and expand the HTTP: GET Request from Client entry in the middle pane. This displays a list of request headers.

In Figure 10.32, you can see that one of the request headers appears to be unusual (but only if you have experience looking at Network Monitor traces; don’t worry, it won’t take long before you get good at this). The HTTP: User-Agent =ed2k header seems like it might be specific to eDonkey2000. We can use this information to create an HTTP Security Filter entry to block the User-Agent Request Header value ed2k.


Figure 10.32.
The Network Monitor Display Window



You can do this by creating an HTTP Security Filter signature using these values. Figure 10.33 shows what the HTTP Security Filter signature would look like to block the eDonkey application.


Figure 10.33.
The Signature Dialog Box



Another example of a dangerous application is Kazaa. Figure 10.34 shows a frame displaying the GET request the Kazaa client sends through the ISA firewall. In the list of HTTP headers, you can see one that can be used to help block the Kazaa client. The P2P-Agent HTTP request header can be blocked completely, or you can create a signature and block the P2P-Agent HTTP request header when it has the value Kazaa. You could also block the Host header in the HTTP request header when the value is set as desktop.kazaa.com.
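The blocking decisions worked out in this section reduce to simple header tests. The sketch below is not ISA Server code; it merely restates the three signatures as a Python function to make the logic explicit:

```python
def should_block(headers):
    """Apply the eDonkey and Kazaa signatures found in the traces: block
    on a User-Agent beginning with ed2k, a P2P-Agent containing Kazaa,
    or a Host header of desktop.kazaa.com."""
    if headers.get("User-Agent", "").startswith("ed2k"):
        return True
    if "Kazaa" in headers.get("P2P-Agent", ""):
        return True
    if headers.get("Host", "") == "desktop.kazaa.com":
        return True
    return False
```

In the ISA firewall itself, the same tests are expressed as HTTP Security Filter signatures rather than code.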


Figure 10.34.
Network Monitor Display Showing Kazaa Request Headers



Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781931836197500174