Mohammad Reza Rajati¹, Jerry Mendel¹, Dongrui Wu²
¹University of Southern California  ²GE Global Research

Kolmogorov → Dempster → Zadeh

Zadeh: "…[Various theories of uncertainty such as] fuzzy logic and probability theory are complementary rather than competitive."

Most Swedes are tall. Most tall Swedes are blond. What is the probability that Magnus (a Swede picked at random) is blond?

The problem involves linguistic quantifiers (most) and linguistic attributes (tall, blond), and an implicit assignment of the linguistic value "Most" to: the portion of Swedes who are tall, and the portion of tall Swedes who are blond. It is therefore categorized as a prototypical advanced CWW (Computing with Words) problem.

The following syllogism holds:

Q1 A's are B's
Q2 (A and B)'s are C's
⟹ (Q1 × Q2) A's are (B and C)'s
⟹ At least (Q1 × Q2) A's are C's

where × is the multiplication of two fuzzy sets via the extension principle:

$$\mu_{Q_1 \times Q_2}(z) = \sup_{z = xy} \min\big(\mu_{Q_1}(x), \mu_{Q_2}(y)\big)$$

and At least is the following operation:

$$\mu_{\text{At least}(Q)}(x) = \sup_{y \le x} \mu_Q(y)$$
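On a discretized domain, both operations are straightforward to implement. The following is a minimal sketch (not the authors' code), assuming membership functions sampled on a uniform grid over [0, 1]:

```python
import numpy as np

u = np.linspace(0.0, 1.0, 101)  # discretized proportion domain [0, 1]

def product_quantifier(mu_q1, mu_q2, u):
    """mu_{Q1 x Q2}(z) = sup_{z = x*y} min(mu_Q1(x), mu_Q2(y)), by brute force."""
    mu_out = np.zeros_like(u)
    for i, x in enumerate(u):
        for j, y in enumerate(u):
            k = int(np.argmin(np.abs(u - x * y)))  # nearest grid point to z = x*y
            mu_out[k] = max(mu_out[k], min(mu_q1[i], mu_q2[j]))
    return mu_out

def at_least(mu_q):
    """mu_{At least(Q)}(x) = sup_{y <= x} mu_Q(y): a running maximum."""
    return np.maximum.accumulate(mu_q)
```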

80% of the graduate students of the EE Department at USC are on F1 visas. 50% of the students of the EE Department at USC are graduate students. Therefore, 50% × 80% = 40% of the students of the EE Department at USC are graduate students on F1 visas, so at least 40% of the students of the EE Department at USC are on F1 visas.

In the Magnus problem: Q1 = Most, A = Swede, B = tall, Q2 = Most, C = blond. Therefore, At least (Most × Most) = At least (Most²) Swedes are both tall and blond. Most is modeled as a monotonic quantifier, so At least (Most²) = Most².

Zadeh interprets a linguistic constraint on the portion of a population as a linguistic probability (LProb), and directly concludes that LProb(Magnus is blond) = Most × Most = Most².

We construct a MF (membership function) for Most: [figure]
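For illustration, a trapezoidal MF can stand in for Most; the breakpoints 0.5 and 0.75 below are assumptions for this sketch, not the paper's actual model. Continuing the code above, Most² is then obtained with the product operation:

```python
def mu_most(x):
    """A hypothetical ramp MF for "Most": 0 below 0.5, 1 above 0.75."""
    return np.clip((x - 0.5) / (0.75 - 0.5), 0.0, 1.0)

mu_most_grid = mu_most(u)
# Most^2 = Most x Most; at_least() is harmless here since Most is monotonic:
mu_most2 = at_least(product_quantifier(mu_most_grid, mu_most_grid, u))
```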

We construct a vocabulary of type-1 fuzzy probabilities to translate the solution into a word: Absolutely improbable, Almost improbable, Very unlikely, Unlikely, Moderately likely, Likely, Very likely, Almost certain, Absolutely certain.

MFs of the words are shown here: [figure]

The MF of Most² is depicted in the following: [figure]

We compute the Jaccard similarity, $s(A,B) = \int \min(\mu_A, \mu_B)\,du \,\big/ \int \max(\mu_A, \mu_B)\,du$, between Most² and the members of the vocabulary. It is concluded that "It is Likely that Magnus is blond."
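A sketch of this decoding step, continuing the code above (the vocabulary MFs, like the MF of Most, are assumptions for illustration):

```python
def jaccard(mu_a, mu_b):
    """Jaccard similarity of two T1 fuzzy sets sampled on a common grid."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

# vocab maps each word to its MF sampled on u; e.g., a hypothetical "Likely"
# trapezoid rising over [0.6, 0.75] and falling over [0.85, 0.95]:
vocab = {
    "Likely": np.minimum(np.clip((u - 0.6) / 0.15, 0.0, 1.0),
                         np.clip((0.95 - u) / 0.1, 0.0, 1.0)),
}
best_word = max(vocab, key=lambda w: jaccard(mu_most2, vocab[w]))
```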

We generally have the following syllogism:

Q A's are B's ⟹ ¬Q A's are not B's

where

$$\mu_{\neg Q}(u) = \mu_Q(1 - u), \qquad \mu_{\text{not } B}(u) = 1 - \mu_B(u)$$

Hence: Most Swedes are tall ⟹ A few Swedes are not tall.
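In code, the antonym quantifier and the negated attribute are one-liners; this continues the sketch above, where mu_most is an assumed model of Most:

```python
def mu_few(x):
    """"A few" as the antonym of Most: mu_{not-Q}(u) = mu_Q(1 - u)."""
    return mu_most(1.0 - x)

def negate_attribute(mu_b):
    """Negated attribute: mu_{not B}(u) = 1 - mu_B(u)."""
    return 1.0 - mu_b
```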

Similarly: Most tall Swedes are blond ⟹ A few tall Swedes are not blond. However, we do not know the distribution of blonds among those few Swedes who are not tall: all of them or none of them could be blond.

The available information is summarized in the following tree: [figure]

In the pessimistic case, none of the Swedes who are not tall is blond, so:

$$\mathrm{LProb}^{-} = \frac{\mathit{Most} \times \mathit{Most} + \mathit{Few} \times \mathit{None}}{\mathit{Most} + \mathit{Few}}$$

In the optimistic case, all of the Swedes who are not tall are blond, so:

$$\mathrm{LProb}^{+} = \frac{\mathit{Most} \times \mathit{Most} + \mathit{Few} \times \mathit{All}}{\mathit{Most} + \mathit{Few}}$$

This follows from the total probability decomposition:

LProb(blond | Swede) = LProb(tall | Swede) × LProb(blond | tall and Swede) + LProb(¬tall | Swede) × LProb(blond | ¬tall and Swede)

Assuming LProb(blond | ¬tall and Swede) is either None or All yields LProb⁻(Magnus is blond) or LProb⁺(Magnus is blond), respectively.

All and None are modeled as singletons:

$$\mu_{\mathit{None}}(u) = \begin{cases} 1 & u = 0 \\ 0 & \text{otherwise} \end{cases} \qquad \mu_{\mathit{All}}(u) = \begin{cases} 1 & u = 1 \\ 0 & \text{otherwise} \end{cases}$$

We also construct models for Most and Few, and a vocabulary of linguistic probabilities.
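The two linguistic probabilities are fuzzy weighted averages. As a rough type-1 illustration only (the paper uses T2FS models, and an exact solution would use alpha-cuts and KM-type algorithms rather than brute force), the extension principle can be applied on a coarse grid; all MFs here are the assumed models from the sketches above:

```python
def fwa_lprob(mu_w1, mu_w2, mu_x1, x2, grid):
    """Brute-force extension principle for y = (w1*x1 + w2*x2) / (w1 + w2),
    where w1 ~ Most, w2 ~ Few, x1 ~ Most, and x2 is crisp (0 = None, 1 = All)."""
    mu_y = np.zeros_like(grid)
    for i, w1 in enumerate(grid):
        for j, w2 in enumerate(grid):
            if w1 + w2 == 0.0:
                continue  # weighted average undefined at w1 = w2 = 0
            for k, x1 in enumerate(grid):
                y = (w1 * x1 + w2 * x2) / (w1 + w2)
                m = int(np.argmin(np.abs(grid - y)))
                mu_y[m] = max(mu_y[m], min(mu_w1[i], mu_w2[j], mu_x1[k]))
    return mu_y

g = np.linspace(0.0, 1.0, 51)  # coarse grid keeps the triple loop cheap
lprob_minus = fwa_lprob(mu_most(g), mu_few(g), mu_most(g), 0.0, g)  # pessimistic
lprob_plus  = fwa_lprob(mu_most(g), mu_few(g), mu_most(g), 1.0, g)  # optimistic
```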

MFs of the T2FS models of Most and Few: [figure]

We construct a vocabulary of linguistic probabilities to decode the solution into a word: [figure]

The pessimistic and optimistic linguistic probabilities are depicted here: [figure]

The Jaccard similarities between the solutions and the members of the vocabulary are shown in the following table: [table]

Decoding via the Jaccard similarities, we conclude: "The probability that Magnus is blond is between Likely and Very Likely." Using the average centroids of the solutions, we can also say: "The probability that Magnus is blond is between around 80% and around 89%."
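The centroid of a type-1 fuzzy probability gives such a crisp summary. A sketch, continuing the code above (the resulting numbers will differ from the paper's 80% and 89%, since our MFs for Most and Few are assumed):

```python
def centroid(mu, grid):
    """Centroid of a T1 fuzzy set: sum(x * mu(x)) / sum(mu(x))."""
    return np.sum(grid * mu) / np.sum(mu)

p_lo = centroid(lprob_minus, g)  # crisp summary of the pessimistic LProb
p_hi = centroid(lprob_plus, g)   # crisp summary of the optimistic LProb
```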

We studied the effect of the size of the vocabulary on the decoded solution. Linguistic approximation is similar to rounding numeric values, so the resolution of the vocabulary is important: when vocabularies are small, the pessimistic and optimistic probabilities may map to the same word.

Vocabularies with different sizes: [figure]


Tables show the similarities of the solutions with the members of each of the vocabularies: [table]

Using all of these vocabularies, both the pessimistic and the optimistic solutions map to the same word, which is Likely for the first vocabulary and Very Likely for the others. For small vocabularies, the total ignorance present in the problem does not affect the outcome.

Novel Weighted Averages are promising when dealing with linguistic probabilities. Our solution builds a probability model for the problem that obeys a set of axioms. Is the problem really reduced to calculating the belief and plausibility of a Dempster-Shafer model?
