Regularization
Quiz, 5 questions

1 point

1.
You are training a classification model with logistic regression. Which of the following statements are true? Check all that apply.

Adding a new feature to the model always results in equal or better performance on the training set.

Adding many new features to the model helps prevent overfitting on the training set.

Introducing regularization to the model always results in equal or better performance on examples not in the training
set.

Introducing regularization to the model always results in equal or better performance on the training set.
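
To see why adding a feature can only match or improve the fit on the training set, here is a minimal Python sketch (not part of the quiz; the data and the train_cost helper are made up for illustration). Because the optimizer is always free to give the new feature a weight of zero, the optimal training cost can only stay the same or decrease.

```python
# Minimal sketch (illustrative only): adding a feature cannot worsen the
# optimal fit on the training set, since its weight can always be set to zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                              # two original features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(float)
X_extra = np.hstack([X, rng.normal(size=(100, 1))])        # add a third (noise) feature

def train_cost(X, y, iters=5000, alpha=0.1):
    """Unregularized logistic regression by gradient descent; returns final training cost."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])          # prepend intercept column
    theta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        h = 1.0 / (1.0 + np.exp(-Xb @ theta))              # sigmoid hypothesis
        theta -= alpha * Xb.T @ (h - y) / len(y)           # gradient step
    h = 1.0 / (1.0 + np.exp(-Xb @ theta))
    return -np.mean(y * np.log(h + 1e-12) + (1 - y) * np.log(1 - h + 1e-12))

print("training cost, 2 features:", train_cost(X, y))
print("training cost, 3 features:", train_cost(X_extra, y))   # equal or lower
```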

1 point

2.

Suppose you ran logistic regression twice, once with λ = 0, and once with λ = 1. One of the times, you got parameters θ = [23.4; 37.9], and the other time you got θ = [1.03; 0.28]. However, you forgot which value of λ corresponds to which value of θ. Which one do you think corresponds to λ = 1?

θ = [1.03; 0.28]

θ = [23.4; 37.9]
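
A rough Python sketch of the reasoning (not taken from the course materials; the data and the fit_theta helper are hypothetical): the regularized cost adds (λ/2m)·Σθ_j², which penalizes large parameters, so the run with λ = 1 should end up with the smaller-magnitude θ.

```python
# Rough sketch (illustrative only): a larger lambda shrinks the learned parameters.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

def fit_theta(X, y, lam, iters=20000, alpha=0.3):
    """Regularized logistic regression by gradient descent; theta[0] (intercept) is not penalized."""
    m = len(y)
    Xb = np.hstack([np.ones((m, 1)), X])
    theta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        h = 1.0 / (1.0 + np.exp(-Xb @ theta))
        grad = Xb.T @ (h - y) / m
        grad[1:] += (lam / m) * theta[1:]                  # regularize everything but the intercept
        theta -= alpha * grad
    return theta

print("theta with lambda = 0:", fit_theta(X, y, lam=0.0))  # large parameter values
print("theta with lambda = 1:", fit_theta(X, y, lam=1.0))  # noticeably smaller parameters
```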

1 point

3.
Which of the following statements about regularization are true? Check all that apply.

Using a very large value of λ cannot hurt the performance of your hypothesis; the only reason we do not set λ to be too large is to avoid numerical problems.

Because regularization causes J(θ) to no longer be convex, gradient descent may not always converge to the global minimum (when λ > 0, and when using an appropriate learning rate α).

Because logistic regression outputs values 0 ≤ h_θ(x) ≤ 1, its range of output values can only be "shrunk" slightly by regularization anyway, so regularization is generally not helpful for it.

Using too large a value of λ can cause your hypothesis to underfit the data.
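
The last statement is easy to check empirically. A hypothetical scikit-learn sketch (C here is the inverse of λ, so a tiny C means a huge λ): with quadratic features and a circular decision boundary, heavy regularization drives the feature weights toward zero and the model underfits even its own training data.

```python
# Hypothetical sketch: too much regularization (small C, i.e. large lambda) underfits.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)          # circular decision boundary
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

for C in (100.0, 1.0, 1e-4):                               # smaller C = stronger penalty
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_poly, y)
    print(f"C = {C:>6}: training accuracy = {clf.score(X_poly, y):.2f}")
```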

1 point

4.
In which one of the following figures do you think the hypothesis has overfit the training set?

Figure:

Figure:

Figure:

Figure:
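
The answer options above are figures (images) that are not reproduced in this text version. As a purely illustrative stand-in, the Python sketch below uses np.polyfit on made-up data to show the numerical signature of an overfit hypothesis: a high-degree polynomial nearly interpolates the training points (training error close to zero) yet does much worse on fresh points from the same curve.

```python
# Illustrative sketch: a high-degree polynomial hugs the training points but
# generalizes poorly -- the hallmark of overfitting.
import numpy as np

rng = np.random.default_rng(4)
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(scale=0.2, size=10)   # noisy samples of sin(x)
x_test = np.linspace(0, 3, 100)
y_test = np.sin(x_test) + rng.normal(scale=0.2, size=100)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)            # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```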

1 point

5.
In which one of the following figures do you think the hypothesis has underfit the training set?

Figure:

Figure:

Figure:

Figure:
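
As with the previous question, the options are figures that do not survive in this text extract. For illustration only (made-up data, numpy stand-in), the sketch below fits a straight line to plainly quadratic data; the telltale sign of underfitting is that the hypothesis has high error even on the training set it was fit to, in contrast to the overfitting sketch above.

```python
# Illustrative sketch: an underfit (too simple) hypothesis has high error even
# on its own training data.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-2, 2, 30)
y = x ** 2 + rng.normal(scale=0.1, size=30)                # clearly quadratic data

for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: training MSE = {train_mse:.3f}")  # degree 1 underfits badly
```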
