
Mohammad Reza Rajati¹, Jerry Mendel¹, Dongrui Wu²
¹University of Southern California  ²GE Global Research

Kolmogorov, Dempster, Zadeh

Zadeh: "[Various theories of uncertainty such as] fuzzy logic and probability theory are complementary rather than competitive."

Most Swedes are tall. Most tall Swedes are blond. What is the probability that Magnus (a Swede picked at random) is blond?

The problem involves linguistic quantifiers (Most) and linguistic attributes (tall, blond). There is an implicit assignment of the linguistic value Most to:
- the portion of Swedes who are tall
- the portion of tall Swedes who are blond
It is therefore categorized as a prototypical advanced CWW (Computing With Words) problem.

Zadeh's solution uses the following quantifier syllogism:

Q1 As are Bs
Q2 (A and B)s are Cs
⇒ (Q1 × Q2) As are (B and C)s
⇒ At least (Q1 × Q2) As are Cs

where × is the multiplication of two fuzzy quantifiers via the extension principle:

$$\mu_{Q_1 \times Q_2}(z) = \sup_{z = xy} \min\big(\mu_{Q_1}(x), \mu_{Q_2}(y)\big)$$
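As a minimal numerical sketch of this product (assuming membership functions discretized on a uniform grid over [0, 1]; the function name and the brute-force search are ours, not from the slides):

```python
import numpy as np

def fuzzy_product(mu_q1, mu_q2, grid):
    """Extension-principle product of two fuzzy quantifiers:
    mu(z) = sup over all x, y with z = x*y of min(mu_q1(x), mu_q2(y))."""
    mu_out = np.zeros_like(grid)
    for i, x in enumerate(grid):
        for j, y in enumerate(grid):
            k = np.argmin(np.abs(grid - x * y))  # nearest grid point to x*y
            mu_out[k] = max(mu_out[k], min(mu_q1[i], mu_q2[j]))
    return mu_out
```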

"At least" is the following operation:

$$\mu_{At\,least(Q)}(x) = \sup_{y \le x} \mu_{Q}(y)$$
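On the same discretization, "At least" is simply a running maximum along an ascending grid (again a sketch with an assumed helper name):

```python
def at_least(mu_q):
    """mu_{At least(Q)}(x) = sup_{y <= x} mu_Q(y): a cumulative maximum
    over an ascending grid."""
    return np.maximum.accumulate(mu_q)
```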

Example: 50% of the students of the EE Department at USC are graduate students. 80% of the graduate students of the EE Department at USC are on F1 visas. Therefore, 50% × 80% = 40% of the students of the EE Department at USC are graduate students on F1 visas, and at least 40% of the students of the EE Department at USC are on F1 visas.

In the Magnus problem: Q1 = Most, Q2 = Most, A = Swede, B = tall, C = blond. Therefore, Most × Most = Most² Swedes are both tall and blond, and At least(Most²) Swedes are blond. Most is modeled as a monotonic quantifier, and therefore At least(Most²) = Most².

Zadeh interprets a linguistic constraint on the portion of a population as a linguistic probability (LProb), and directly concludes that: LProb(Magnus is blond) = Most × Most = Most².

We construct a membership function (MF) for Most:
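A trapezoidal model is one plausible choice; continuing the sketch above, the breakpoints below (0.5, 0.7, 0.9, 1) are illustrative assumptions, not the exact parameters of the slide:

```python
def trapezoid(grid, a, b, c, d):
    """Trapezoidal MF with support [a, d] and core [b, c]."""
    mu = np.zeros_like(grid)
    up = (grid >= a) & (grid < b)
    mu[up] = (grid[up] - a) / (b - a)
    mu[(grid >= b) & (grid <= c)] = 1.0
    down = (grid > c) & (grid <= d)
    mu[down] = (d - grid[down]) / (d - c)
    return mu

grid = np.linspace(0.0, 1.0, 201)
mu_most = trapezoid(grid, 0.5, 0.7, 0.9, 1.0)               # assumed parameters
mu_most2 = at_least(fuzzy_product(mu_most, mu_most, grid))  # At least(Most^2) = Most^2
```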

We construct a vocabulary of type-1 fuzzy probabilities to translate the solution to a word: Absolutely Improbable, Almost Improbable, Very Unlikely, Unlikely, Moderately Likely, Likely, Very Likely, Almost Certain, Absolutely Certain.

The MFs of the words are shown here:

The MF of Most² is depicted in the following:

We compute the Jaccard similarity between Most² and the members of the vocabulary.
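The Jaccard similarity of two discretized type-1 fuzzy sets is the cardinality of their intersection over that of their union; a sketch (the vocabulary dictionary `vocab` is an assumed structure, built from the word MFs above):

```python
def jaccard(mu_a, mu_b):
    """Jaccard similarity: sum of pointwise minima over sum of pointwise maxima."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

# Decode Most^2 into the most similar word of the vocabulary,
# where vocab maps each word to its MF on the same grid:
# best_word = max(vocab, key=lambda w: jaccard(mu_most2, vocab[w]))
```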

It is concluded that it is Likely that Magnus is blond.

Most Swedes are tall ⇒ A few Swedes are not tall. In general we have the following syllogism:

Q As are Bs
⇒ Q′ As are not Bs

where Q′ is the antonym of Q and not B is the complement of B:

$$\mu_{Q'}(u) = \mu_{Q}(1-u), \qquad \mu_{\text{not }B}(u) = 1 - \mu_{B}(u)$$
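On a uniform grid over [0, 1], the antonym is just a reversal of the MF array; a sketch continuing the code above, deriving Few from Most:

```python
def antonym(mu_q):
    """mu_{antonym(Q)}(u) = mu_Q(1 - u); a flip on a uniform grid over [0, 1]."""
    return mu_q[::-1]

mu_few = antonym(mu_most)  # Few as the antonym of Most
```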

Similarly: Most tall Swedes are blond ⇒ A few tall Swedes are not blond. However, we do not know the distribution of blonds among those few Swedes who are not tall: all of them or none of them could be blond.

The available information is summarized in the following tree:

In the pessimistic case, none of the Swedes who are not tall is blond, so:

$$LProb^{-} = \frac{Most \times Most + Few \times None}{Most + Few}$$

In the optimistic case, all of the Swedes who are not tall are blond, so:

$$LProb^{+} = \frac{Most \times Most + Few \times All}{Most + Few}$$

LProb(blond|Swede) = LProb(tall|Swede) × LProb(blond|tall and Swede) + LProb(not tall|Swede) × LProb(blond|not tall and Swede). Assuming LProb(blond|not tall and Swede) is either None or All yields LProb⁻(Magnus is blond) or LProb⁺(Magnus is blond).
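Both bounds are fuzzy weighted averages and can be computed α-cut by α-cut: the weighted average sum(x_i·w_i)/sum(w_i) is monotone in each single argument, so its extrema over an interval box are attained at corners, and for two terms an exhaustive corner enumeration is exact. A type-1 sketch continuing the earlier code, under those assumptions (the slides use interval type-2 models, where the same computation runs on the lower and upper MFs; each occurrence of Most is treated independently, as in the slide's formula, and singleton None and All collapse to the constants 0 and 1):

```python
from itertools import product

def alpha_cut(mu, grid, alpha):
    """[min, max] of the alpha-cut of a discretized normal, convex fuzzy set."""
    pts = grid[mu >= alpha]
    return pts.min(), pts.max()

def fwa_alpha_cut(x_cuts, w_cuts):
    """Alpha-cut of the fuzzy weighted average sum(x_i w_i) / sum(w_i),
    found by enumerating every corner of the interval box (exact, O(4^n))."""
    vals = [sum(x * w for x, w in zip(xs, ws)) / sum(ws)
            for xs in product(*x_cuts) for ws in product(*w_cuts)]
    return min(vals), max(vals)

alpha = 0.5
most_cut = alpha_cut(mu_most, grid, alpha)
few_cut = alpha_cut(mu_few, grid, alpha)
# Pessimistic: x = (Most, None = 0); optimistic: x = (Most, All = 1); weights (Most, Few)
lprob_minus = fwa_alpha_cut([most_cut, (0.0, 0.0)], [most_cut, few_cut])
lprob_plus = fwa_alpha_cut([most_cut, (1.0, 1.0)], [most_cut, few_cut])
```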

All and None are modeled as singletons:


$$\mu_{None}(u) = \begin{cases} 1 & u = 0 \\ 0 & \text{otherwise} \end{cases} \qquad \mu_{All}(u) = \begin{cases} 1 & u = 1 \\ 0 & \text{otherwise} \end{cases}$$

We also construct models for Most and Few, and a vocabulary of linguistic probabilities

MFs of the type-2 fuzzy set (T2FS) models of Most and Few:

We construct a vocabulary of linguistic probabilities to decode the solution to a word:

The pessimistic and optimistic linguistic probabilities are depicted here:

The Jaccard similarities between the solutions and the members of the vocabulary are shown in the following table:

The probability that Magnus is blond is between Likely and Very Likely. Using the average centroids of the solutions, we can also say that the probability that Magnus is blond is between around 80% and around 89%.
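For the numeric summary, each solution is defuzzified by its centroid; a type-1 sketch (for the interval type-2 solutions, the slides' average centroid would be the midpoint of the Karnik-Mendel centroid interval, which we do not reproduce here):

```python
def centroid(mu, grid):
    """Center of gravity of a type-1 fuzzy set on a discretized domain."""
    return np.sum(grid * mu) / np.sum(mu)
```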

Linguistic approximation is similar to rounding numeric values, so the resolution of the vocabulary is important: when vocabularies are small, the pessimistic and optimistic probabilities may map to the same word. We studied the effect of the size of the vocabulary on the decoded solution.

Vocabularies with different sizes:

The tables show the similarities of the solutions with the members of each vocabulary.

Using all of these vocabularies, both the pessimistic and the optimistic solutions map to the same word: Likely for the first vocabulary and Very Likely for the others. For small vocabularies, the total ignorance present in the problem does not affect the outcome.

Novel Weighted Averages are promising for dealing with linguistic probabilities. Our solution builds a probability model for the problem that obeys a set of axioms. An open question: is the problem really reduced to calculating the belief and plausibility of a Dempster-Shafer model?
