Machine Learning (ML) | Set 8

1. The objective of the support vector machine (SVM) algorithm is to find a hyperplane in an N-dimensional space (N being the number of features) that distinctly classifies the data points.

Correct : A. true
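As a minimal sketch of this idea (toy data and class labels made up for illustration), a linear SVM fit with scikit-learn exposes the separating hyperplane through `coef_` and `intercept_`:

```python
# Sketch: a linear SVM finds a separating hyperplane w·x + b = 0
# in feature space (here N = 2). Data points are invented.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")
clf.fit(X, y)

# clf.coef_ and clf.intercept_ define the learned hyperplane;
# points on either side get different class labels
print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))
```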

2. Hyperplanes are _____________ boundaries that help classify the data points.

Correct : B. decision

3. The _____ of the hyperplane depends upon the number of features.

Correct : A. dimension

4. Hyperplanes are decision boundaries that help classify the data points.

Correct : A. true

5. SVM algorithms use a set of mathematical functions that are defined as the kernel.

Correct : A. true

6. In SVR, we try to fit the error within a certain threshold.

Correct : A. true
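A minimal sketch of that threshold (synthetic data; the `epsilon` parameter is SVR's error-tube width):

```python
# Sketch: SVR ignores residuals that fall inside the epsilon tube.
# Synthetic near-linear data, invented for illustration.
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = 2.0 * X.ravel() + rng.normal(scale=0.1, size=50)

# epsilon sets the error threshold the fit is allowed to ignore
reg = SVR(kernel="linear", epsilon=0.5, C=10.0)
reg.fit(X, y)
print(reg.predict([[5.0]]))   # close to the underlying 2*x = 10
```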

7. What is the purpose of performing cross-validation?

Correct : C. Both A and B
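Cross-validation estimates generalization performance and supports model comparison/tuning without touching the test set. A minimal sketch using scikit-learn's built-in iris data:

```python
# Sketch: 5-fold cross-validation scores a model on held-out folds,
# giving an estimate of out-of-sample performance.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(len(scores), scores.mean())   # one score per fold
```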

8. Which of the following is true about Naive Bayes?

Correct : C. Both A and B

9. Which of the following is not supervised learning?

Correct : A. PCA

10. ______ can be adopted when it's necessary to categorize a large amount of data with only a few complete examples, or when there's a need to impose some constraints on a clustering algorithm.

Correct : B. Semi-supervised

11. In reinforcement learning, this feedback is usually called ___.

Correct : C. Reward

12. In the last decade, many researchers started training bigger and bigger models built with several different layers, which is why this approach is called _____.

Correct : A. Deep learning

13. There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows for simpler algorithms called _____.

Correct : C. Model-free

14. ______ showed better performance than other approaches, even without a context-based model.

Correct : B. Deep learning

15. If two variables are correlated, is it necessary that they have a linear relationship?

Correct : B. No

16. Suppose we fit “Lasso Regression” to a data set with 100 features (X1, X2, …, X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?

Correct : B. It is more likely for X1 to be included in the model
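The intuition: after multiplying X1 by 10, its coefficient needs to be about 10x smaller for the same fit, so the L1 penalty bites less and the feature is more likely to survive. A sketch with synthetic data (all numbers invented) where X1 is dropped before rescaling but kept after:

```python
# Sketch: rescaling a feature by 10 makes its Lasso coefficient ~10x
# smaller for the same contribution, weakening the effective L1 penalty
# on that feature. Synthetic data, invented for illustration.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 2))
y = 0.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

alpha = 0.8  # strong enough to zero out the weak feature X1
coef_before = Lasso(alpha=alpha).fit(X, y).coef_

X_scaled = X.copy()
X_scaled[:, 0] *= 10.0   # rescale X1 by 10
coef_after = Lasso(alpha=alpha).fit(X_scaled, y).coef_

print(coef_before[0], coef_after[0])   # X1 excluded before, included after
```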

17. If a linear regression model fits the training data perfectly, i.e., the train error is zero, then _____________________

Correct : C. Couldn’t comment on Test error

18. In syntax of linear model lm(formula,data,..), data refers to ______

Correct : B. Vector

19. We can also compute the coefficients of linear regression with the help of an analytical method called the “Normal Equation”. Which of the following is/are true about the “Normal Equation”?
1. We don’t have to choose the learning rate
2. It becomes slow when the number of features is very large
3. No need to iterate

Correct : D. 1,2 and 3.
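A minimal sketch of the Normal Equation, theta = (XᵀX)⁻¹Xᵀy, in NumPy (noise-free synthetic data so the recovered coefficients are exact). Solving the d×d system costs on the order of d³, which is why the method slows down as the number of features grows:

```python
# Sketch: closed-form least squares via the Normal Equation.
# No learning rate, no iterations; cost grows ~d^3 in the feature count.
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta                      # noise-free for clarity

X_b = np.hstack([np.ones((100, 1)), X])  # prepend intercept column
# solve (X_b^T X_b) theta = X_b^T y instead of forming an explicit inverse
theta = np.linalg.solve(X_b.T @ X_b, X_b.T @ y)
print(theta)   # intercept ~0, then the true coefficients
```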

20. Which of the following options is true regarding “Regression” and “Correlation”? Note: y is the dependent variable and x is the independent variable.

Correct : D. The relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric.

21. Which of the following are real world applications of the SVM?

Correct : D. All of the above

22. Let’s say you are working with categorical feature(s), and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) to the categorical feature(s). What challenges may you face if you apply OHE to a categorical variable of the train dataset?

Correct : D. Both A and B
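The typical failure is a category that appears in the test data but not in the training data. A sketch with made-up categories, using scikit-learn's `OneHotEncoder` and its `handle_unknown="ignore"` option:

```python
# Sketch: an encoder fit on train data meets an unseen test category.
# With handle_unknown="ignore", the unseen value maps to an all-zero row
# instead of raising an error. Category names are invented.
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train = np.array([["red"], ["green"], ["blue"]])
test = np.array([["red"], ["purple"]])   # "purple" never appeared in train

enc = OneHotEncoder(handle_unknown="ignore").fit(train)
encoded = enc.transform(test).toarray()
print(encoded)   # second row is all zeros
```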

23. _____ can accept a NumPy RandomState generator or an integer seed.

Correct : B. random_state
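A sketch of `random_state` in practice with `train_test_split`: the same integer seed reproduces the same split, and a `numpy.random.RandomState` instance is also accepted:

```python
# Sketch: random_state accepts an int seed or a RandomState instance,
# making shuffles and splits reproducible.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)

# the same integer seed yields the same split every time...
a1, b1 = train_test_split(X, test_size=0.3, random_state=42)
a2, b2 = train_test_split(X, test_size=0.3, random_state=42)

# ...and a RandomState generator is accepted as well
a3, b3 = train_test_split(X, test_size=0.3, random_state=np.random.RandomState(0))
print(np.array_equal(a1, a2))
```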

24. In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least _____ valid options.

Correct : B. 2
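Two such options in scikit-learn are integer codes via `LabelEncoder` and one-hot rows via `LabelBinarizer`. A sketch with invented labels:

```python
# Sketch: two label encodings scikit-learn provides.
# LabelEncoder assigns integer codes (classes sorted alphabetically);
# LabelBinarizer produces one column per class.
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["cat", "dog", "cat", "bird"]

codes = LabelEncoder().fit_transform(labels)
onehot = LabelBinarizer().fit_transform(labels)
print(codes)    # bird -> 0, cat -> 1, dog -> 2
print(onehot)   # 4 rows x 3 class columns
```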

25. ______ is the most drastic option and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.

Correct : A. Removing the whole line
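A sketch of that drastic option with NumPy (invented values): any row containing a missing entry is discarded wholesale, which is why it only makes sense on large datasets:

```python
# Sketch: dropping every row that contains a missing (NaN) value.
import numpy as np

data = np.array([[1.0, 4.0],
                 [np.nan, 5.0],
                 [3.0, np.nan]])

# keep only fully observed rows
complete = data[~np.isnan(data).any(axis=1)]
print(complete)   # only the first row survives
```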

26. It's possible to specify whether the scaling process must include both mean and standard deviation using the parameters ________.

Correct : C. Both A & B
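A sketch with scikit-learn's `StandardScaler`, whose `with_mean` and `with_std` parameters toggle centering and unit-variance scaling independently:

```python
# Sketch: with_mean / with_std control which parts of standardization run.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

full = StandardScaler(with_mean=True, with_std=True).fit_transform(X)
center_only = StandardScaler(with_mean=True, with_std=False).fit_transform(X)

print(full.mean(), full.std())   # ~0 mean, unit std
print(center_only.mean())        # ~0 mean, but spread left unchanged
```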

27. Suppose you have fitted a complex regression model on a dataset. Now you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describe the relationship of bias and variance with lambda.

Correct : C. In case of very large lambda; bias is high, variance is low
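A sketch of the shrinkage behind that answer (synthetic data; in scikit-learn the penalty lambda is called `alpha`): a large penalty pulls coefficients toward zero, raising bias while lowering variance:

```python
# Sketch: growing the Ridge penalty shrinks coefficients toward zero
# (high bias, low variance at large lambda). Data invented.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

small = Ridge(alpha=0.01).fit(X, y)     # near-unpenalized fit
large = Ridge(alpha=1000.0).fit(X, y)   # heavy shrinkage

print(np.abs(small.coef_).sum(), np.abs(large.coef_).sum())
```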

28. The function used for linear regression in R is __________

Correct : A. lm(formula, data)

29. In the mathematical Equation of Linear Regression Y = β1 + β2X + ϵ, (β1, β2) refers to __________

Correct : C. (Y-Intercept, Slope)

30. We have been given a dataset with n records, in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen with bias and variance as you increase the size of the training data?

Correct : D. Bias increases and Variance decreases

31. Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?
1. I will add more variables
2. I will start introducing polynomial degree variables
3. I will remove some variables

Correct : A. 1 and 2

32. The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?

Correct : A. Large datasets

33. The effectiveness of an SVM depends upon:

Correct : D. All of the above

34. We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that a new feature will dominate the others
2. Sometimes, feature normalization is not feasible in case of categorical variables
3. Feature normalization always helps when we use a Gaussian kernel in SVM

Correct : B. 1 and 2

35. Suppose you are using an RBF kernel in SVM with a high gamma value. What does this signify?

Correct : B. The model would consider only the points close to the hyperplane for modeling
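A sketch of why (synthetic circular data, gamma values invented): a high gamma makes each training point's influence very local, so the decision boundary hugs nearby points and the training set is fit very tightly, while a lower gamma gives a smoother boundary:

```python
# Sketch: with an RBF kernel, high gamma = very local influence,
# so the model attends mainly to points near the boundary and
# fits the training data tightly. Data invented.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)   # circular boundary

smooth = SVC(kernel="rbf", gamma=0.5).fit(X, y)
wiggly = SVC(kernel="rbf", gamma=500.0).fit(X, y)

# high gamma memorizes the training set far more closely
print(smooth.score(X, y), wiggly.score(X, y))
```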