
RESULTS AND DISCUSSION

We continuously monitor the average strictness level of the existing policies in each category of images. The average strictness level is defined as follows, where L_{p_i} denotes the strictness level of policy P_i and N_p is the total number of policies. Note that the average strictness level is computed by excluding outlier policies: in some situations, users may define special policies whose strictness level differs greatly from most of the others, either much stricter or much looser. Including such outliers in the average strictness level calculation would not represent the average case properly. Therefore, when a policy is inserted, we first compare its strictness level with the current average strictness level. If the difference exceeds a threshold (ε), we put the policy in the outlier group. In the experiments, we set ε to 4, because each role of the policy subject has 4 different strictness levels; a change in policy preference of more than 4 is considered prominent, as it exceeds one quarter of the maximum strictness level. As time evolves, the average strictness levels in each category form a curve.
Formulae:

\bar{L} = \frac{1}{N_p} \sum_{i=1}^{N_p} L_{p_i}
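The outlier-exclusion step described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation; the function name and return convention are assumptions.

```python
from statistics import mean

def update_average(levels, new_level, avg, eps=4):
    """Insert a policy's strictness level, excluding outliers.

    eps=4 is the threshold from the text: a deviation of more than 4
    (one quarter of the maximum strictness level) marks an outlier,
    which is kept out of the running average.
    Returns (levels, new_average, is_outlier).
    """
    if avg is not None and abs(new_level - avg) > eps:
        return levels, avg, True  # outlier: excluded from the average
    levels.append(new_level)
    return levels, mean(levels), False
```

For example, after inserting levels 8 and 10 (average 9), a policy with level 20 deviates by 11 > 4 and is routed to the outlier group, leaving the average unchanged.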

Tables:
Table 1. Result of Direct User Evaluation

Item Type                  Count   Ratio
Total Policies              1025   100%
Exactly Matched Policies     944   92.1%
Policies with 1 error         67   6.4%
Policies with 2 errors        10   1.1%
Policies with 3 errors         4   0.4%
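The ratio column of Table 1 can be recomputed from the raw counts. This is a sketch with illustrative dictionary keys; the recomputed percentages may differ from the printed ratios by a rounding step.

```python
# Recompute the ratio column of Table 1 from the raw counts.
counts = {
    "exact match": 944,
    "1 error": 67,
    "2 errors": 10,
    "3 errors": 4,
}
total = sum(counts.values())  # total evaluated policies
ratios = {k: round(100 * v / total, 1) for k, v in counts.items()}
```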

RESULTS
We evaluate the effectiveness of the proposed system in terms of policy prediction accuracy and user acceptability. The results show that in the earlier system, privacy policies are applied to the contents uploaded by a user, whereas in this system, privacy policies are generated for each user profile instead of for individual contents, which provides more security.
The input data for the system, i.e., the number of images or contents shared by multiple users, is shown in Table 1.

Table 1: Input Dataset

#users   #content shared   #classes
10       30                5
20       70                7
30       50                7

The result evaluation for our system is shown in Table 2: the number of users who upload or share different kinds of images, and the number of actions performed on them, i.e., view, comment, tag, upload, and download.

Table 2: Result Evaluation

Action     #TCA   #UC   #CNM   #PP   PR     RC
View        20     5     4      3    0.80   0.75
Comment     32    22    17     16    0.77   0.94
Upload      50    12     9      8    0.75   0.88
Tag         30    14    12     10    0.85   0.83
Download    22    20    18     15    0.90   0.83

#TCA = total content shared; #UC = user contents; #CNM = content with no match; #PP = privacy predictions; PR = precision; RC = recall.

Each user was given a distinct set of images taken from the PicAlert project data set, which includes Flickr images on various subjects with different privacy sensitivity. On average, each user labeled images with their policies and added two to three tags each.
Here, the input is an image and the output is a set of policies: for every image uploaded by a user, a number of policies are predicted. When testing our system, the accuracy of correctly predicting a policy is 90 percent, whereas the accuracy obtained with the existing approach is 70 percent.

Table 3. Accuracy obtained by the existing and proposed approaches (in percent)

Existing   Proposed
70         90

Fig 2. Accuracy obtained by the existing and proposed approaches (in percent)

Table 3 shows the difference between the results of the existing and proposed approaches. Table 4 shows the accuracy obtained by the individual approaches. In the existing system, the content-based approach, the metadata-based approach, and the combined content-and-metadata approach yield accuracies of 70 percent, 90 percent, and 90 percent, respectively. With the same approaches, the proposed system obtains 80 percent, 90 percent, and 95 percent.

Table 4. Accuracy obtained by different methods (in percent)

System     Content Based   Metadata Based   Content with Metadata
Existing   70              90               90
Proposed   80              90               95

Fig 3. Accuracy obtained by different methods (in percent)

Precision and Recall

Precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. In this system, precision and recall are calculated as follows, consistent with the columns of Table 2:

Precision = (no. of image contents with no match) / (no. of user contents)

Recall = (no. of correctly predicted images) / (no. of image contents with no match)
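Using the column relationships implied by Table 2 (PR = #CNM/#UC and RC = #PP/#CNM), the reported figures can be recomputed. The dictionary below is an illustrative sketch of that arithmetic, not part of the system; a few entries differ from the table in the last rounded digit.

```python
# Recompute precision (PR) and recall (RC) for each action from
# the Table 2 counts: PR = #CNM / #UC, RC = #PP / #CNM.
table2 = {
    # action: (#UC, #CNM, #PP)
    "View": (5, 4, 3),
    "Comment": (22, 17, 16),
    "Upload": (12, 9, 8),
    "Tag": (14, 12, 10),
    "Download": (20, 18, 15),
}
metrics = {
    action: (round(cnm / uc, 2), round(pp / cnm, 2))
    for action, (uc, cnm, pp) in table2.items()
}
```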
Fig 4. Graph of precision and recall
Fig 4 shows the graph of precision and recall for each user action, i.e., view, comment, upload, tag, and download.

A3P-Core
Our first experiment compares A3P-core with alternative prediction approaches. In particular, we use a straw-man solution as the baseline approach, whereby we sample at random a small set of image settings from the same user and use them to determine a baseline setting (by counting the most frequent items). The baseline settings are applied to all images of the user. Further, we compare A3P-core with two variants of itself, in order to evaluate the contribution that each component of A3P-core makes to privacy prediction. The first variant uses only content-based image classification followed by our policy mining algorithm, denoted Content Mining. The second variant uses only tag classification followed by policy mining, denoted Tag Mining. All the algorithms were tested against the collected real user policies. Fig. 4 shows the percentage of predicted policies in four groups:

[Figure: A3P comparative performance]


Exact Match means a predicted policy is exactly the same as the real policy of the same image; x-component Match means a predicted policy and its corresponding real policy have x components (i.e., subject, action, condition) fully matched; No Match means that the predicted policy is wrong for all components. As shown in the figure, each component of A3P-core singularly contributes toward policy prediction; however, none of them individually reaches the accuracy achieved by A3P-core in its entirety. Specifically, A3P-core has 90 percent exact matches and zero complete mismatches. Moreover, pairwise comparisons were made between A3P-core, Content Mining, Tag Mining, and the baseline algorithm, corrected using a Bonferroni method.
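The grouping of predictions into Exact Match, x-component Match, and No Match can be sketched as follows. The dict-based policy representation (keys `subject`, `action`, `condition`, taken from the component names in the text) is an assumption for illustration, not the system's actual data structure.

```python
def match_group(predicted, actual):
    """Classify a predicted policy against the real policy by counting
    how many of the three components (subject, action, condition)
    match exactly."""
    components = ("subject", "action", "condition")
    matched = sum(predicted[c] == actual[c] for c in components)
    if matched == len(components):
        return "exact match"      # all components agree
    if matched == 0:
        return "no match"         # wrong on every component
    return f"{matched}-component match"
```

For instance, a prediction that gets the subject and action right but the condition wrong falls into the 2-component match group.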
