Mohammad Reza Rajati (1), Jerry Mendel (1), Dongrui Wu (2)
(1) University of Southern California  (2) GE Global Research

Kolmogorov → Dempster → Zadeh
Zadeh: “… [Various theories of uncertainty such as] fuzzy logic and probability theory are complementary rather than competitive.”

Most Swedes are tall. Most tall Swedes are blond. What is the probability that Magnus (a Swede picked at random) is blond?

The problem involves linguistic quantifiers (most) and linguistic attributes (tall, blond); therefore it is categorized as a prototypical advanced CWW problem. There is an implicit assignment of the linguistic value “Most” to: the portion of Swedes who are tall, and the portion of tall Swedes who are blond.

We generally have the following product syllogism:

Q1 A’s are B’s
Q2 (A and B)’s are C’s
⟹ (Q1 × Q2) A’s are (B and C)’s
⟹ At least (Q1 × Q2) A’s are C’s

where × is the multiplication of two fuzzy sets via:

$\mu_{Q_1 \times Q_2}(z) = \sup_{z = xy} \min\left(\mu_{Q_1}(x), \mu_{Q_2}(y)\right)$

and At least is the following operation:

$\mu_{\mathrm{At\,least}(Q)}(x) = \sup_{y \le x} \mu_Q(y)$
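As a minimal computational sketch of these two operations on a discretized universe (the function names and the monotone MF assumed for Most below are illustrative choices, not the paper's):

```python
import numpy as np

U = np.linspace(0.0, 1.0, 201)  # discretized universe of proportions

# Illustrative monotone MF for Most (assumed shape, not the paper's parameters):
# 0 below 0.4, rising linearly, 1 above 0.7.
mu_most = np.clip((U - 0.4) / 0.3, 0.0, 1.0)

def fuzzy_product(mu_a, mu_b, grid):
    """mu_{A×B}(z) = sup over z = x*y of min(mu_A(x), mu_B(y)), on a uniform grid."""
    n = len(grid) - 1
    mu_out = np.zeros_like(mu_a)
    for i, x in enumerate(grid):
        for j, y in enumerate(grid):
            k = int(round(x * y * n))  # nearest grid index to z = x*y
            mu_out[k] = max(mu_out[k], min(mu_a[i], mu_b[j]))
    return mu_out

def at_least(mu_q):
    """mu_{At least(Q)}(x) = sup over y <= x of mu_Q(y): a running maximum."""
    return np.maximum.accumulate(mu_q)

mu_most2 = fuzzy_product(mu_most, mu_most, U)  # Most × Most = Most²
# For a monotonic quantifier, At least(Most²) = Most², as used in the Magnus derivation:
assert np.allclose(at_least(mu_most2), mu_most2)
```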

Example: 50% of the students of the EE Department at USC are graduate students, and 80% of the graduate students of the EE Department at USC are on F1 visas. Then 50% × 80% = 40% of the students are graduate students on F1 visas, so at least 40% of the students of the EE Department at USC are on F1 visas.

In the Magnus problem: Q1 = Most, Q2 = Most, A = Swede, B = tall, C = blond. Most is modeled as a monotonic quantifier, and therefore At least(Most²) = Most². Therefore, Most × Most = Most² Swedes are both tall and blond.

Zadeh interprets a linguistic constraint on the portion of a population as a linguistic probability (LProb), and directly concludes that LProb(Magnus is blond) = Most × Most = Most².

We construct a MF for Most (see figure).

We construct a vocabulary of type-1 fuzzy probabilities to translate the solution to a word: Absolutely improbable, Almost improbable, Very unlikely, Unlikely, Moderately likely, Likely, Very likely, Almost certain, Absolutely certain.

The MFs of the words are shown in the figure.

The MF of Most² is depicted in the figure.

We compute the Jaccard similarity between Most² and the members of the vocabulary. It is concluded that “It is Likely that Magnus is blond.”
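The Jaccard similarity of two type-1 fuzzy sets on a shared grid is the cardinality of their min (intersection) over the cardinality of their max (union). A hedged sketch follows; the vocabulary MFs and the solution MF here are illustrative triangles, not the paper's actual models:

```python
import numpy as np

U = np.linspace(0.0, 1.0, 201)

def tri(u, a, b, c):
    """Triangular MF with feet a, c and peak b (illustrative helper)."""
    return np.interp(u, [a, b, c], [0.0, 1.0, 0.0])

def jaccard(mu_a, mu_b):
    """Jaccard similarity: |A ∩ B| / |A ∪ B| with min/max as intersection/union."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

# Illustrative subset of the vocabulary (parameters are assumptions):
vocab = {
    "Moderately likely": tri(U, 0.30, 0.50, 0.70),
    "Likely":            tri(U, 0.50, 0.70, 0.90),
    "Very likely":       tri(U, 0.70, 0.90, 1.00),
}

# Stand-in for the Most² solution MF (it would be computed as in the earlier sketch):
mu_solution = tri(U, 0.45, 0.70, 0.95)

scores = {w: jaccard(mu_solution, mu) for w, mu in vocab.items()}
print(scores, "->", max(scores, key=scores.get))  # decode to the closest word
```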

We generally have the following syllogism:

Q A’s are B’s ⟹ ¬Q A’s are not B’s

where $\mu_{\neg Q}(u) = \mu_Q(1 - u)$ and $\mu_{\mathrm{not}\,B}(u) = 1 - \mu_B(u)$.

Hence: Most Swedes are tall ⟹ A few Swedes are not tall.
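On a uniform grid over [0, 1], the antonym and the complement have one-line discrete implementations; a small sketch, again with an assumed MF for Most:

```python
import numpy as np

U = np.linspace(0.0, 1.0, 201)
mu_most = np.clip((U - 0.4) / 0.3, 0.0, 1.0)  # assumed monotone MF for Most

# Antonym quantifier ¬Most = Few: mu_Few(u) = mu_Most(1 - u).
# On a uniform, symmetric grid this is a simple array reversal.
mu_few = mu_most[::-1]

def complement(mu_b):
    """Complement of an attribute: mu_not_B(u) = 1 - mu_B(u)."""
    return 1.0 - mu_b
```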

Similarly: Most tall Swedes are blond ⟹ A few tall Swedes are not blond. However, we do not know the distribution of blonds among those few Swedes who are not tall: all of them or none of them could be blond.

The available information is summarized in a tree (see figure).

In the pessimistic case, none of the Swedes who are not tall are blond, so:

$LProb^{-} = \dfrac{Most \times Most + Few \times None}{Most + Few}$

In the optimistic case, all of the Swedes who are not tall are blond, so:

$LProb^{+} = \dfrac{Most \times Most + Few \times All}{Most + Few}$

By total probability:

LProb(blond | Swede) = LProb(tall | Swede) × LProb(blond | tall and Swede) + LProb(¬tall | Swede) × LProb(blond | ¬tall and Swede)

Assuming LProb(blond | ¬tall and Swede) is either None or All yields LProb⁻(Magnus is blond) or LProb⁺(Magnus is blond), respectively.
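A minimal sketch of the pessimistic/optimistic computation, treating it as a type-1 fuzzy weighted average evaluated by α-cuts (the paper itself uses Novel Weighted Averages with IT2 models, and the MFs below are assumptions). With only two terms, the weighted mean is monotone in each argument, so enumerating the α-cut endpoints is exact:

```python
import numpy as np
from itertools import product

U = np.linspace(0.0, 1.0, 201)
mu_most = np.clip((U - 0.4) / 0.3, 0.0, 1.0)  # assumed MF for Most
mu_few = mu_most[::-1]                        # Few as the antonym of Most

def alpha_cut(mu, grid, alpha):
    """Endpoints [left, right] of the alpha-cut of a sampled fuzzy set."""
    idx = np.where(mu >= alpha - 1e-9)[0]
    return grid[idx[0]], grid[idx[-1]]

def fwa_two_terms(x1, w1, x2, w2):
    """Interval of (x1*w1 + x2*w2) / (w1 + w2) over interval inputs.
    The weighted mean is monotone in each argument, so for two terms the
    extrema are attained at endpoint combinations (vertex enumeration)."""
    vals = [(a * c + b * d) / (c + d) for a, c, b, d in product(x1, w1, x2, w2)]
    return min(vals), max(vals)

NONE, ALL = (0.0, 0.0), (1.0, 1.0)  # crisp singletons as degenerate intervals

for alpha in (0.2, 0.5, 1.0):
    most = alpha_cut(mu_most, U, alpha)  # LProb(tall) and LProb(blond | tall)
    few = alpha_cut(mu_few, U, alpha)    # LProb(not tall)
    pess = fwa_two_terms(most, most, NONE, few)  # LProb-: blond | not tall = None
    opt = fwa_two_terms(most, most, ALL, few)    # LProb+: blond | not tall = All
    print(f"alpha={alpha:.1f}  LProb- in {pess}  LProb+ in {opt}")
```

Sweeping α over (0, 1] and stacking these intervals reconstructs the MFs of LProb⁻ and LProb⁺.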

We also construct models for Most and Few, and a vocabulary of linguistic probabilities. All and None are modeled as singletons:

$\mu_{None}(u) = \begin{cases} 1 & u = 0 \\ 0 & \text{otherwise} \end{cases}$  $\qquad \mu_{All}(u) = \begin{cases} 1 & u = 1 \\ 0 & \text{otherwise} \end{cases}$

The MFs of the T2FS models of Most and Few are shown in the figure.

We construct a vocabulary of linguistic probabilities to decode the solution into a word (see figure).

The pessimistic and optimistic linguistic probabilities are depicted in the figure.

The Jaccard similarities between the solutions and the members of the vocabulary are shown in the table.

“The probability that Magnus is blond is between Likely and Very Likely.” Using the average centroids of the solutions, we can also say: “The probability that Magnus is blond is between around 80% and around 89%.”
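As a sketch of how such a crisp readout can be produced: the discrete centroid of a type-1 fuzzy set. The paper reports average centroids of the IT2 solutions (the midpoint of the Karnik-Mendel centroid interval); the type-1 version below, with an assumed stand-in MF, just conveys the computation:

```python
import numpy as np

U = np.linspace(0.0, 1.0, 201)

def centroid(mu, grid):
    """Discrete centroid of a type-1 fuzzy set: sum(u * mu(u)) / sum(mu(u))."""
    return float(np.sum(grid * mu) / np.sum(mu))

# Assumed stand-in for a solution MF (e.g., LProb+ from the earlier sketch):
mu_solution = np.interp(U, [0.6, 0.85, 1.0], [0.0, 1.0, 1.0])
print(f"around {100 * centroid(mu_solution, U):.0f}%")  # crisp "around X%" readout
```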

Linguistic approximation is similar to rounding numeric values, so the resolution of the vocabulary is important. When vocabularies are small, the pessimistic and optimistic probabilities may map to the same word. We studied the effect of vocabulary size on the decoded solution.

Vocabularies with different sizes (see figures).



Tables show the similarities of the solutions with the members of each vocabulary.

Using all of these vocabularies, both the pessimistic and the optimistic solutions map to the same word, which is Likely for the first vocabulary and Very Likely for the others. For small vocabularies, the total ignorance present in the problem does not affect the outcome.

Novel Weighted Averages are promising when dealing with linguistic probabilities. Our solution builds a probability model for the problem that obeys a set of axioms. Is the problem really reduced to calculating the belief and plausibility of a Dempster-Shafer model?
