Você está na página 1de 5

Kedar Nath Singh et al. / (IJAEST) International Journal of Advanced Engineering Sciences and Technologies, Vol. 6, Issue 1, pp. 060-064

Attacks on Trust and Reputation System & its Defensive methods in Semantic Web
Kedar Nath Singh
Department of CSE, Ambedkar Institute of Technology, New Delhi, India
knsinghait@gmail.com

Suresh Kumar
Department of CSE, Ambedkar Institute of Technology, New Delhi, India
sureshpoonia@yahoo.com
Abstract: The Semantic Web allows individuals to act under uncertainty and with the risk of negative consequences; trust is therefore one of its inherent constituents. The Semantic Web, conceived as a collection of agents, will function more effectively when trust can be established. Trust is essential for agent collaboration: each agent has to make subjective trust judgments about other agents with respect to the services they claim to be able to supply. In this paper, the trust strategies and approaches for multi-agent systems on the semantic web are described. Furthermore, we discuss how these trust mechanisms can be subverted by an attacker and present defensive methods to handle such attacks on an agent.

Keywords: Trust; Reputation; Multi-Agent System; Attacks; Semantic Web.


I. INTRODUCTION

The Semantic Web, which retrieves meaningful information when searching the web, consists of several agents and is an advanced version of the current World Wide Web (WWW) [1]. These agents are automated software components that can be designed according to requirements. As pointed out in [1], establishing trust among agents is essential for their collaboration: agents must trust each other in order to provide the services they claim. This trust factor plays a crucial role and depends on a number of factors, such as the services offered and their quality.


Recent research has shown tremendous growth in the field of establishing trust among agents and thereby increasing the security of communication that takes place over the semantic web. A minimum level of security is always mandatory between two communicating parties so that reliable and secure communication can take place. The trust factor depends on whether the communicating party is trustworthy or not: the risk involved in communicating with an untrustworthy party is much higher than the risk involved with a trustworthy one. Trust and risk go hand in hand; the more trustworthy a party is, the less risk is involved in communicating with it. This relation between trust, risk and the semantic web offers a convenient way for users to exchange information on the web while also protecting the information and resources on the web.

Whether an agent is trustworthy or not is measured by its reputation among other agents and by its performance in providing services. These can be calculated by a trust and reputation system [2, 3, 4] or through a reliable trusted third party. Trust is an important factor for cooperation among agents. Maintaining trust and handling it are two different processes: maintaining trust among agents depends on the agent itself, whereas handling the trust factor is the job of a reputation system. Moreover, an agent can be compromised by an attacker, which disturbs the communication process and may allow the attacker to access, alter or delete confidential information on the web. Such a breach of security causes several problems among agents and can be used to launch attacks on other, uncompromised agents. These attacks can take the form of a newcomer attack, unfair ratings, bad mouthing, a Sybil attack [5, 6], a denial-of-service attack [7], and so on. Several defensive methods and strategies have been proposed to provide appropriate protection against them. In this paper, section II describes the trust strategies and approaches of trust management. Section III covers the attacks on agents that affect the trust factor. Defense strategies to handle such attacks are given in section IV. The conclusion is given in section V.


II. TRUST STRATEGIES AND APPROACHES

Given below are the strategies used in establishing trust among agents in the semantic web.

A) Strategies
A number of strategies exist to deal with trust and its establishment among agents. In general, five basic strategies are defined for trust [23], summarized below.

1) Optimism: This is the simplest strategy for trust, in which an agent initially trusts all other agents irrespective of their performance and the services they offer; only strong reasons justify not trusting another agent in the community. This kind of trust is considered the default attitude of an agent: an agent trusts every other agent until that agent fails the test.


2) Pessimism: In this strategy an agent does not trust any other agent in the community; an agent trusts another agent only when it finds strong reasons to do so. Such an approach restricts communication among agents and is the opposite of the optimistic strategy. Pessimism represents trust based on personal acquaintance in the offline world, which is the basic model of trust (local trust [24]).

3) Centralization: The third approach involves a third party that all other agents trust. This third party should be reliable and trusted by every agent in the community. It may use a certification system in which it issues certificates to agents that can be trusted for particular services; these services may vary from agent to agent and are specified on the certificates. An agent is considered trusted if it carries a valid certificate, so trust is based on the certificates issued by the trusted third party. Certificates can be revoked if an agent is found to behave maliciously and can be renewed when their validity is about to expire. Some institutions merely hold the relevant collected information for users; the TRELLIS system [25] is an example of such an institution.

4) Investigation: The investigation strategy involves monitoring and evaluating other agents. On the basis of these operations, an agent discovers the salient details of the others' operation, and trust is managed and maintained accordingly. This is an active form of trust that considers others trustworthy on the basis of their evaluation and therefore reduces the level of uncertainty among agents.


5) Transitive: This kind of trust involves direct as well as indirect trust: if an agent trusts another agent, then the trustor also trusts all the agents on which the trustee relies. Social network analysis techniques such as Friend of a Friend (FOAF) [26] are used to measure trust over a network extended with trust relations. A similar approach, in which users provide trust values for a number of other users and webs of trust are explored [4], is also used to measure trust.
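As an illustration of the transitive strategy, the following sketch propagates trust along paths of a small trust graph by multiplying edge values and keeping the best path. The graph, the agent names and the multiplicative combination rule are illustrative assumptions for this sketch only, not part of FOAF or of the cited systems.

# Hypothetical sketch: transitive trust as the best multiplicative path in a trust graph.
trust_graph = {                      # direct trust values in [0, 1]
    "A": {"B": 0.9, "C": 0.6},
    "B": {"D": 0.8},
    "C": {"D": 0.5},
    "D": {},
}

def transitive_trust(graph, source, target):
    """Return the highest trust reachable from source to target
    when trust along a path is the product of edge values."""
    best = {source: 1.0}
    frontier = [(1.0, source)]
    while frontier:
        value, node = max(frontier)          # expand the most trusted node first
        frontier.remove((value, node))
        for neighbour, direct in graph[node].items():
            candidate = value * direct
            if candidate > best.get(neighbour, 0.0):
                best[neighbour] = candidate
                frontier.append((candidate, neighbour))
    return best.get(target, 0.0)

print(round(transitive_trust(trust_graph, "A", "D"), 2))   # 0.9 * 0.8 = 0.72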


B) Approaches of Trust Management
There are currently two major approaches to managing trust: policy-based and reputation-based trust management. The two approaches have been developed within the context of different environments and target different requirements.

1) Policy-based Trust Management: Policy-based trust relies on objective, strong security mechanisms such as signed certificates and trusted certification authorities (CAs) in order to regulate the access of users to services. Moreover, the access decision is usually based on mechanisms with well-defined semantics (e.g., logic programming) providing strong verification and analysis support. The result of such a policy-based trust management approach is usually a binary decision according to which the requester is trusted or not, and thus the service (or resource) is allowed or denied.

2) Reputation-based Trust Management: Reputation-based trust relies on a soft, computational approach to the problem of trust. In this case, trust is typically computed from local experience together with the feedback given by other entities in the network (e.g., users who have used the services of that provider).
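The contrast between the two approaches can be sketched as follows: the policy-based decision is binary and certificate-driven, while the reputation-based decision aggregates feedback into a score that is compared against a threshold. The certificate fields, the aggregation rule and the threshold below are illustrative assumptions rather than a prescribed mechanism.

# Illustrative sketch only: certificate fields, aggregation and threshold are assumptions.
from statistics import mean

TRUSTED_CAS = {"RootCA"}

def policy_decision(certificate: dict) -> bool:
    """Policy-based trust: binary allow/deny from a signed certificate."""
    return certificate.get("issuer") in TRUSTED_CAS and certificate.get("valid", False)

def reputation_decision(feedback: list, threshold: float = 0.7) -> bool:
    """Reputation-based trust: aggregate feedback in [0, 1] and compare to a threshold."""
    return bool(feedback) and mean(feedback) >= threshold

print(policy_decision({"issuer": "RootCA", "valid": True}))   # True -> access allowed
print(reputation_decision([0.9, 0.8, 0.4]))                   # mean 0.7 -> trusted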


C) Design Issues of Trust Establishment Methods
Trust is always negotiated between two parties for a specific action and is related to a specific service; different trust relationships appear in different business contexts. The measurement may be absolute (e.g., a probability) or relative (e.g., a dense order). For each trust relationship, one or more numerical values, referred to as trust values, describe the level of trustworthiness. There are two common ways of establishing trust: directly and indirectly. Direct trust is established when one party can directly observe the second party's behavior, whereas indirect trust is established when the first party computes trust in the other party on the basis of recommendations from other entities. Direct trust is established through observations of whether the previous interactions between the two parties (two agents) were successful or failed. The observation is often described by two variables, s and f, where s denotes the number of successful interactions and f the number of failed interactions. In the beta-function based method [8], the direct trust value is calculated as (s + 1)/(s + f + 2).

Recommendation trust is a type of direct trust: one party judges whether the recommendations provided by other entities are correct or not. Using the beta-function, the recommendation trust is calculated in the same way, as (g + 1)/(g + b + 2), where g and b are the numbers of good and bad recommendations received from the entity, respectively.
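The beta-function computation above can be written out directly. The function names and example counts below are illustrative; the formulas follow the definitions given in the text.

# Sketch of the beta-function trust values described above.
def direct_trust(s: int, f: int) -> float:
    """Direct trust from s successful and f failed interactions."""
    return (s + 1) / (s + f + 2)

def recommendation_trust(g: int, b: int) -> float:
    """Recommendation trust from g good and b bad recommendations."""
    return (g + 1) / (g + b + 2)

print(direct_trust(8, 2))            # 9/12 = 0.75
print(recommendation_trust(3, 1))    # 4/6  = 0.67 (approximately)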

III. ATTACKS ON AGENTS AFFECTING TRUST

In this section, attacks on agents in the semantic web and their effect on the trust factor are examined.

A) An Individual Malicious Agent: This is the most common attack on trust and reputation systems for the semantic web. The malicious behavior of an agent is dangerous because such an agent always provides bad services. A malicious agent may also falsely rate the reputation of an honest agent in the multi-agent system.


B) Unfair Ratings: As long as recommendations are taken into consideration, malicious agents can provide dishonest recommendations to boost the trust values of other malicious agents [11]. This attack is also referred to as the bad mouthing attack [12]. The problem is especially challenging when the number of honest ratings is relatively small and unfair ratings contribute a significant portion of the overall ratings.

C) Playbook Attack: In this attack, an agent or a group of agents initially behaves well for a particular period and provides all services to its users in order to build a good reputation. On the basis of that high reputation, the agent then behaves in the opposite manner and provides degraded services. In other words, the agent maintains a book of all possible plays (attacks) and the corresponding times at which each attack is executed; the key to maximizing the attack's effect is choosing the play and its execution time [21]. Moreover, this change of behaviour is not always considered unethical, since reputation is dynamic in nature: all agents in the semantic web monitor each other's performance and try to maximize their fitness.

D) Reputation Lag Attack: A reputation lag attack occurs when an agent maintains a high reputation for a period of time and then uses this reputation to cheat for a further period, after which it terminates the account from which it executed the attack and reopens a fresh account. For example, an agent may maintain a good reputation for one month and then exploit that reputation for 15 days to cheat; these 15 days are the lag time over which the agent cheats.

E) Discrimination: This attack involves the dual behaviour of an agent towards its users at the same time: the agent provides high-quality services to one user while providing low-quality services to another. It is not guaranteed that the agent continues to provide the same high- or low-quality service to a given user; it may switch its behaviour at any time from high quality to low quality and vice versa. Typically, an agent provides high-quality services to a priori trusted users and low-quality services to unknown users.

F) Malicious Consortium: Malicious agents may form a malicious consortium and always falsely rate the trust values of the other malicious agents in the multi-agent system, maximizing the trust level of the agents within the consortium. Identifying such a group of agents is very difficult. In other words, a group of agents works together to maximize the effect of its attacks on others: all agents with the motive of attacking trust each other and promote one another.

G) Proliferation Attack: In this attack, an agent pretends that it can offer multiple services to its users when in fact it has only one. In other words, the agent provides the same service but represents it as multiple different services to its users. For example, an agent that can only determine the trust factor of a web page may present this single service in different guises to different users.

H) Re-entry or Newcomer Attack: A newcomer attack is one in which an agent leaves the community once it has reached a state of low reputation and then rejoins the community as a newcomer, trying to rebuild its reputation. This re-entry into the community erases the reputation accumulated in the past and is considered unethical in almost all scenarios. Re-entry can also be a vulnerability when an agent masquerades as another agent that is already present in the community; this causes problems for the other agents and users, which are incapable of detecting the genuine agent.




I) On-Off Attack: The on-off attack refers to the grey behaviour of an agent that behaves well for some time and then exhibits bad behaviour. This change in behaviour affects the trust factor among agents and exploits the dynamic property of trust. Moreover, agents executing such an attack can even pause for a while to escape detection [28].

J) Sybil Attacks: A Sybil attack occurs when an agent presents multiple fake identities to its users. Each time, one of these identities is selected as the service provider; it provides a bad service, after which it is disconnected and replaced with a new identity (Fig. 1).

Fig. 1. Sybil attack

K) Man-in-the-Middle Attack: A man-in-the-middle attack occurs when a malicious agent intercepts messages sent from a service provider to the intended recipient agent. The attacker (malicious agent) alters the message and sends it on to the original recipient, which receives the message, sees that it apparently came from the service provider, and acts on it. When the recipient agent sends a message back to the service provider, the attacker again intercepts it, alters it, and forwards it to the service provider. The service provider and the recipient agent never know that they have been attacked.




L) Denial of Service: An attack that prevents users from availing themselves of any service is called a denial of service (DoS); an agent under this attack denies its users service. Centralized systems are particularly vulnerable to denial-of-service attacks, since an attacker can overload the system by sending fake data (or updates) and requests, which overburdens the system and results in denial of service.

IV. DEFENSIVE METHODS

It is important to withstand the attacks described in the previous section in order to provide secure communication. Many researchers are exploring ways to prevent such attacks on agents in the semantic web. In this section, defensive methods for resisting these attacks are presented.

A) Defense for Unfair Ratings: A strategy often proposed in the literature for detecting possible unfair ratings is to compare the ratings about the same service entity provided by different agents and to use ratings from a priori trusted agents as a benchmark. The defense against unfair ratings is threefold [17]. First, the direct trust and recommendation trust records are maintained separately; only entities that have provided good recommendations can earn high recommendation trust. Second, a necessary condition is imposed on trust propagation: trust can propagate along a path A -> B -> Y only if the recommendation trust between A and B is greater than a threshold. Third, besides the action trust, the recommendation trust is treated as an additional dimension in the malicious-entity detection process. As a result, if an agent has low recommendation trust, its recommendations have little influence on good agents' decision-making, and it can be detected as malicious and expelled from the network.
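A minimal sketch of the first two points follows, assuming an illustrative recommendation-trust table, propagation threshold and weighting rule that are not prescribed by [17].

# Sketch only: the recommendation-trust table, threshold and weighting rule are assumptions.
recommendation_trust = {"A": 0.9, "B": 0.8, "C": 0.2}   # trust in each agent as a recommender
PROPAGATION_THRESHOLD = 0.5

def usable_recommendations(ratings: dict) -> dict:
    """Keep only ratings coming from recommenders above the propagation threshold."""
    return {agent: rating for agent, rating in ratings.items()
            if recommendation_trust.get(agent, 0.0) > PROPAGATION_THRESHOLD}

def aggregated_rating(ratings: dict) -> float:
    """Weight each accepted rating by the recommender's recommendation trust."""
    accepted = usable_recommendations(ratings)
    if not accepted:
        return 0.0
    weights = {a: recommendation_trust[a] for a in accepted}
    return sum(accepted[a] * weights[a] for a in accepted) / sum(weights.values())

print(aggregated_rating({"A": 0.9, "B": 0.7, "C": 0.1}))   # C's unfair rating is filtered out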

B) Defense for Malicious Consortium: Measure the reputation of agents dynamically. Measure reputation based on feedback from multiple agents, take into account the trust levels provided by different agents, and compare the maximum and minimum ratings. In this way one can point out a group of malicious agents that falsely rate each other's trust levels [17, 27].
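One simple way to read this is to flag rated agents whose maximum and minimum ratings differ suspiciously and to single out raters that sit far above the aggregate score. The thresholds and the grouping of ratings below are illustrative assumptions, not the scheme of [17] or [27].

# Sketch only: the spread threshold and margin are illustrative assumptions.
def suspicious_spread(ratings: dict, max_spread: float = 0.5) -> bool:
    """Flag a rated agent whose maximum and minimum ratings differ suspiciously."""
    values = list(ratings.values())
    return max(values) - min(values) > max_spread

def likely_colluders(ratings: dict, overall: float, margin: float = 0.3) -> set:
    """Raters far above the aggregate score may be boosting a consortium member."""
    return {agent for agent, r in ratings.items() if r - overall > margin}

ratings = {"A": 0.2, "B": 0.3, "X": 0.95, "Y": 0.9}   # X and Y boost their ally's rating
overall = sum(ratings.values()) / len(ratings)
print(suspicious_spread(ratings), likely_colluders(ratings, overall))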

C) Defense for On-Off Attack: To defend against the on-off attack, past as well as present ratings are considered. A long period of interaction with good behaviour builds a high reputation, and even a little bad behaviour can decrease the reputation value; hence bad behaviour should be remembered for longer than good behaviour [27].

D) Defense for Sybil and Newcomer Attacks: Many approaches have been proposed to counter the Sybil attack [18, 19]. Defending against Sybil and newcomer attacks is often quite challenging: monitoring each agent's historical behavior is often not sufficient to counter a Sybil, because a Sybil agent can behave well initially and then launch an attack. Defenses therefore rely on authentication and access control, which make registering a new or faked identity difficult. A number of mechanisms [13, 14, 15, 16] have been proposed to prevent Sybil identities from artificially boosting the attacker's reputation.
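The on-off defense can be sketched with an asymmetric forgetting factor, so that a bad outcome pulls reputation down strongly while good outcomes repair it only slowly. The factors and the update rule below are illustrative assumptions, not the exact scheme of [27].

# Sketch of asymmetric forgetting for the on-off defense.
def update_reputation(current: float, outcome_good: bool) -> float:
    """Exponentially weighted reputation; the retention factor depends on the outcome."""
    retention = 0.9 if outcome_good else 0.4   # a bad outcome overrides much more of the history
    observation = 1.0 if outcome_good else 0.0
    return retention * current + (1 - retention) * observation

rep = 0.5
for good in [True] * 10 + [False] * 3:         # behaves well for a while, then turns bad
    rep = update_reputation(rep, good)
print(round(rep, 3))                           # the few bad interactions cut reputation sharply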

E) Defense for Man-in-the-Middle Attack: Use cryptography: if an agent encrypts the data before transmitting it, an attacker can still intercept it but cannot read it; if the attacker cannot read the message, it cannot know which parts to alter, and if it blindly modifies the encrypted message, the original recipient is unable to decrypt it successfully and therefore knows that the message has been tampered with. Use hashed message authentication codes (HMACs): if an attacker alters the message, the recalculation of the HMAC at the recipient fails and the data can be rejected as invalid.
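The HMAC check described above can be sketched with Python's standard library; the shared key and the messages are illustrative placeholders.

# Sketch of the HMAC verification described above, using the standard library.
import hashlib
import hmac

KEY = b"shared-secret-key"

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the HMAC at the recipient; reject the data if it does not match."""
    return hmac.compare_digest(sign(message), tag)

msg = b"service response"
tag = sign(msg)
print(verify(msg, tag))                   # True: message accepted
print(verify(b"tampered response", tag))  # False: tampering detected, data rejected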

F) Defense for Denial of Service: Several factors make semantic web applications vulnerable to DoS attacks [22]. Mechanisms to prevent denial of service against reputation dissemination depend on the structure used for the storage and dissemination of reputation values. TrustMe [20] uses randomization techniques to mitigate the power of malicious collectives, but when participants are selected at random the attack cannot be controlled entirely effectively; an acknowledgement-based mechanism may also be useful to prevent it.


V. CONCLUSION

In this paper, various trust strategies used in the semantic web have been discussed, the attacks on agents that affect the trust factor have been presented, and the defenses to handle such attacks have been described. Recent research has produced a number of solutions for resisting attacks on the agents that are responsible for calculating reputation values and establishing trust; such solutions have been studied and presented in this paper. However, new attacks emerge every day, and existing defence techniques find them difficult to detect: these techniques are effective only against predefined attacks. There is therefore a need for defensive techniques that can detect and withstand emerging attacks as well as the predefined ones.

REFERENCES
[1] T. Berners-Lee, J. Hendler, and O. Lassila. The Semantic Web. Scientific American, May 2001.
[2] Zaki Malik and Athman Bouguettaya. RATEWeb: Reputation Assessment for Trust Establishment among Web Services. The VLDB Journal, Volume 18, Number 4, pp. 885-911, Springer, 2009.
[3] Jennifer Golbeck and James Hendler. Inferring Reputation on the Semantic Web. WWW 2004, May 17-22, 2004, New York, NY, USA. ACM.
[4] Matthew Richardson, Rakesh Agrawal, and Pedro Domingos. Trust Management for the Semantic Web. Second International Semantic Web Conference, Sanibel Island, 2003, pp. 351-368.
[5] F. R. Schreiber. Sybil. Warner Books, 1973.
[6] John R. Douceur. The Sybil Attack. Peer-to-Peer Systems, Springer, 2002.
[7] Roger M. Needham. Denial of Service. CCS '93: Proceedings of the 1st ACM Conference on Computer and Communications Security.
[8] A. Josang, R. Ismail, and C. Boyd. A Survey of Trust and Reputation Systems for Online Service Provision. Decision Support Systems, vol. 43, no. 2, 2005, pp. 618-644.
[9] M. Katebi and S. D. Katebi. Trust Models Analysis for the Semantic Web. Second International Conference on Developments in eSystems Engineering (DESE), IEEE, 2009.
[10] J. Golbeck and B. Parsia. Trust Networks on the Semantic Web. Cooperative Information Agents VII, Springer Lecture Notes in Computer Science, Volume 2782, 2003, pp. 238-249. DOI: 10.1007/978-3-540-45217-1_18.
[11] C. Dellarocas. Mechanisms for Coping with Unfair Ratings and Discriminatory Behavior in Online Reputation Reporting Systems. Proc. 21st Intl. Conf. on Information Systems, Brisbane, Queensland, Australia, Dec. 2000.
[12] S. Buchegger and J.-Y. Le Boudec. Coping with False Accusations in Misbehavior Reputation Systems for Mobile Ad-Hoc Networks. EPFL Tech. Rep. IC/2003/31, EPFL-DI-ICA, 2003.
[13] A. Cheng and E. Friedman. Sybilproof Reputation Mechanisms. ACM P2PECON, 2005.
[14] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust Incentive Techniques for Peer-to-Peer Networks. ACM Electronic Commerce, 2004.
[15] J. Hopcroft and D. Sheldon. Manipulation-Resistant Reputations Using Hitting Time. WAW, 2007.
[16] H. Rowaihy, W. Enck, P. McDaniel, and T. La Porta. Limiting Sybil Attacks in Structured Peer-to-Peer Networks. INFOCOM, 2007.
[17] Y. Sun, Z. Han, and K. J. R. Liu. Defense of Trust Management Vulnerabilities in Distributed Networks. IEEE Communications Magazine, Volume 46, Issue 2.
[18] Haifeng Yu, Chenwei Shi, Michael Kaminsky, Phillip B. Gibbons, and Feng Xiao. DSybil: Optimal Sybil-Resistance for Recommendation Systems. 30th IEEE Symposium on Security and Privacy, 2009.
[19] Haifeng Yu, Michael Kaminsky, Phillip B. Gibbons, and Abraham D. Flaxman. SybilGuard: Defending Against Sybil Attacks via Social Networks. SIGCOMM '06: Proceedings of the 2006 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, New York, NY, USA, ACM, pp. 267-278.
[20] A. Singh and L. Liu. TrustMe: Anonymous Management of Trust Relationships in Decentralized P2P Systems. Third International Conference on Peer-to-Peer Computing (P2P 2003), pp. 142-149.
[21] Audun Josang and Jennifer Golbeck. Challenges for Robust Trust and Reputation Systems. 5th International Workshop on Security and Trust Management (STM 2009), Saint Malo, France, September 2009.
[22] Suriadi Suriadi, Andrew Clark, and Desmond Schmidt. Validating Denial of Service Vulnerabilities in Web Services. Fourth International Conference on Network and System Security, 2010.
[23] Kieron O'Hara, Harith Alani, Yannis Kalfoglou, and Nigel Shadbolt. Trust Strategies for the Semantic Web. Intelligence, Agents, Multimedia Group, School of Electronics and Computer Science, University of Southampton, UK.
[24] K. O'Hara. Trust: From Socrates to Spin. Icon Books, Cambridge, 2004.
[25] Y. Gil and V. Ratnakar. Trusting Information Sources One Citizen at a Time. Proc. 1st International Semantic Web Conference (ISWC), Sardinia, Italy, 2002.
[26] The Friend of a Friend (FOAF) project, http://www.foaf-project.org.
[27] Y. Sun, Z. Han, and K. J. R. Liu. A Trust Evaluation Framework in Distributed Networks: Vulnerability Analysis and Defense Against Attacks.
[28] L. Felipe Perrone and Samuel C. Nelson. A Study of On-Off Attack Models for Wireless Ad Hoc Networks. First International Workshop on Operator-Assisted (Wireless Mesh) Community Networks (OpComm 2006), Berlin, Germany, September 2006.
