Quiznetik

Machine Learning (ML) | Set 5

1. Which of the following is true about Ridge or Lasso regression methods in case of feature selection?

Correct : B. lasso regression uses subset selection of features

2. Which of the following statement(s) can

Correct : A. 1 and 2

3. We can also compute the coefficients of linear regression with the help of an analytical method called Normal Equation. Which of the following is/are true about Normal Equation? 1. We don't have to choose the learning rate 2. It becomes slow when the number of features is very large 3. No need to iterate

Correct : D. 1,2 and 3.
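
A minimal NumPy sketch of the Normal Equation on a made-up dataset (the data and coefficients below are purely illustrative): the coefficients come out in closed form, so no learning rate and no iterations are needed, but inverting X^T X is what makes the method slow when the number of features is very large.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # 100 samples, 3 features (synthetic)
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0         # known slope coefficients plus an intercept

Xb = np.hstack([np.ones((X.shape[0], 1)), X])    # prepend a bias column of ones
theta = np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y      # theta = (X^T X)^-1 X^T y
print(theta)                                     # approx [3.0, 1.5, -2.0, 0.5]
```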

4. If two variables are correlated, is it necessary that they have a linear relationship?

Correct : B. no

5. Which of the following options is true regarding Regression and Correlation? Note: y is the dependent variable and x is the independent variable.

Correct : D. the relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric.

6. Suppose you are using a Linear SVM classifier with a 2 class classification problem. Now you have been given the following data in which some points are circled red that are representing support vectors. If you remove any one of the red points from the data, will the decision boundary change?

Correct : A. yes

7. If you remove the non-red circled points from the data, will the decision boundary change?

Correct : B. false

8. When the C parameter is set to infinity, which of the following holds true?

Correct : A. the optimal hyperplane, if it exists, will be the one that completely separates the data

9. Suppose you are building a SVM model on data X. The data X can be error prone, which means that you should not trust any specific data point too much. Now think that you want to build a SVM model which has a quadratic kernel function of polynomial degree 2 that uses the Slack variable C as one of its hyperparameters. What would happen when you use a very large value of C (C -> infinity)?

Correct : A. we can still classify the data correctly for the given setting of the hyperparameter C
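
A hedged scikit-learn sketch of the effect described in questions 8 and 9, on a made-up blob dataset: with a degree-2 polynomial kernel and a very large C there is almost no tolerance for slack, so the fit behaves like a hard margin, while a small C allows margin violations.

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, random_state=0)     # toy 2-class data
hard_margin = SVC(kernel="poly", degree=2, C=1e5).fit(X, y)    # C -> infinity: hard-margin behaviour
soft_margin = SVC(kernel="poly", degree=2, C=0.01).fit(X, y)   # small C: soft margin
print(hard_margin.n_support_, soft_margin.n_support_)          # the soft margin typically uses more support vectors
```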

10. SVM can solve linear and non-linear problems

Correct : A. true

11. The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space (N is the number of features) that distinctly classifies the data points.

Correct : A. true

12. Hyperplanes are ____ boundaries that help classify the data points.

Correct : B. decision

13. The ____ of the hyperplane depends upon the number of features.

Correct : A. dimension

14. Hyperplanes are decision boundaries that help classify the data points.

Correct : A. true

15. SVM algorithms use a set of mathematical functions that are defined as the kernel.

Correct : A. true

16. In SVM, Kernel function is used to map a lower dimensional data into a higher dimensional data.

Correct : A. true
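
An illustrative sketch of question 16 (scikit-learn and toy data assumed): the RBF kernel implicitly maps the data into a higher-dimensional space where two concentric circles become separable, while a linear boundary in the original space cannot separate them.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)   # linear boundary in the original space
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)         # implicit higher-dimensional mapping
print(linear_acc, rbf_acc)                                # the RBF kernel scores far higher here
```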

17. In SVR we try to fit the error within a certain threshold.

Correct : A. true

18. What is the purpose of performing cross- validation?

Correct : C. both a and b
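
A minimal scikit-learn sketch of cross-validation (iris is used only as a convenient built-in dataset): the model is trained and scored on k different train/validation splits, which gives an estimate of how it will perform on unseen data.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)  # 5-fold cross-validation
print(scores.mean())   # average accuracy over the 5 held-out folds
```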

19. Which of the following is true about Naive Bayes ?

Correct : C. both a and b

20. Which of the following is not supervised learning?

Correct : A. pca

21. ____ can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints on a clustering algorithm.

Correct : B. semi-supervised

22. In reinforcement learning, this feedback is usually called as ____.

Correct : C. reward

23. In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ____.

Correct : A. deep learning

24. There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called ____.

Correct : C. model-free

25. ____ showed better performance than other approaches, even without a context-based model

Correct : B. deep learning

26. If two variables are correlated, is it necessary that they have a linear relationship?

Correct : B. no

27. Correlated variables can have zero correlation coefficient. True or False?

Correct : A. true

28. Suppose we fit Lasso Regression to a data set which has 100 features (X1, X2, ..., X100). Now, we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Now, which of the following options will be correct?

Correct : B. it is more likely for x1 to be included in the model
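
A hedged sketch of question 28 with synthetic data and an arbitrary alpha: after multiplying X1 by 10, a much smaller coefficient expresses the same effect, so the same L1 penalty is less likely to shrink it exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 0.3 * X[:, 0] + rng.normal(scale=1.0, size=200)   # only X1 matters, and only weakly

coef_before = Lasso(alpha=0.5).fit(X, y).coef_[0]
X_scaled = X.copy()
X_scaled[:, 0] *= 10                                  # rescale X1 only
coef_after = Lasso(alpha=0.5).fit(X_scaled, y).coef_[0]
print(coef_before, coef_after)   # typically X1 is dropped before rescaling and kept after
```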

29. If a Linear regression model fits the data perfectly, i.e., train error is zero, then

Correct : C. couldn't comment on test error

30. Which of the following metrics can be used for evaluating regression models? i) R Squared ii) Adjusted R Squared iii) F Statistics iv) RMSE / MSE / MAE

Correct : D. i, ii, iii and iv
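
A brief sketch of the metrics in question 30 on made-up predictions, using scikit-learn where it provides them; Adjusted R Squared and the F statistic are usually derived from R Squared together with the sample and feature counts (for example via statsmodels) rather than taken directly from scikit-learn.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([3.0, 5.0, 7.5, 10.0])    # hypothetical targets
y_pred = np.array([2.8, 5.3, 7.0, 10.4])    # hypothetical predictions

print(r2_score(y_true, y_pred))                       # R Squared
print(mean_squared_error(y_true, y_pred))             # MSE
print(np.sqrt(mean_squared_error(y_true, y_pred)))    # RMSE
print(mean_absolute_error(y_true, y_pred))            # MAE
```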

31. In the syntax of the linear model lm(formula, data, ...), data refers to

Correct : B. vector

32. Linear Regression is a supervised machine learning algorithm.

Correct : A. true

33. Is it possible to design a Linear regression algorithm using a neural network?

Correct : A. true

34. Which of the following methods do we use to find the best fit line for data in Linear Regression?

Correct : A. least square error
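
A minimal NumPy sketch of the least-squares fit for one independent variable on made-up points; the closed-form slope and intercept are also the two coefficients asked about in question 40.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # hypothetical input
y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])     # hypothetical output

slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(intercept, slope)   # the best fit line is y = intercept + slope * x
```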

35. Suppose you are training a linear regression model. Now consider these points. 1. Overfitting is more likely if we have less data 2. Overfitting is more likely when the hypothesis space is small. Which of the above statement(s) are correct?

Correct : C. 1 is true and 2 is false

36. We can also compute the coefficients of linear regression with the help of an analytical method called Normal Equation. Which of the following is/are true about Normal Equation? 1. We don't have to choose the learning rate 2. It becomes slow when the number of features is very large 3. No need to iterate

Correct : D. 1,2 and 3.

37. Which of the following options is true regarding Regression and Correlation? Note: y is the dependent variable and x is the independent variable.

Correct : D. the relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric.

38. In a simple linear regression model (One independent variable), if we change the input variable by 1 unit, how much will the output variable change?

Correct : D. by its slope

39. Generally, which of the following method(s) is used for predicting a continuous dependent variable? 1. Linear Regression 2. Logistic Regression

Correct : B. only 1

40. How many coefficients do you need to estimate in a simple linear regression model (One independent variable)?

Correct : B. 2

41. In a real problem, you should check to see if the SVM is separable and then include slack variables if it is not separable.

Correct : B. false

42. Which of the following are real world applications of the SVM?

Correct : D. all of the above

43. 100 people are at a party. The given data gives information about how many wear pink or not, and whether they are a man or not. Imagine a pink-wearing guest leaves; was it a man?

Correct : A. true

44. For the given weather data, calculate the probability of playing

Correct : B. 0.64

45. In SVR we try to fit the error within a certain threshold.

Correct : A. true

46. In reinforcement learning, this feedback is usually called as ____.

Correct : C. reward

47. Which of the following sentences is correct?

Correct : C. both a & b

48. Reinforcement learning is particularly efficient when ____.

Correct : D. all above

49. Let's say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset?

Correct : D. both a and b
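
A hedged scikit-learn sketch of the train/test mismatch problem behind question 49, with made-up category values: a category that appears only in the test data has no column of its own in an encoding fitted on the train data.

```python
from sklearn.preprocessing import OneHotEncoder

train = [["red"], ["green"], ["blue"]]
test = [["red"], ["purple"]]                       # "purple" never appeared in training

enc = OneHotEncoder(handle_unknown="ignore").fit(train)
print(enc.transform(test).toarray())               # the unseen category becomes an all-zero row
```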

50. Which of the following sentence is FALSE regarding regression?

Correct : D. it discovers causal relationships.

51. Which of the following methods is used to find the optimal features for cluster analysis?

Correct : D. all above

52. scikit-learn also provides functions for creating dummy datasets from scratch:

Correct : D. all above
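
A short sketch of the scikit-learn dataset helpers this question refers to; each accepts the random_state parameter mentioned in question 53, which takes an integer seed or a NumPy RandomState generator.

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

Xc, yc = make_classification(n_samples=100, n_features=5, random_state=42)
Xr, yr = make_regression(n_samples=100, n_features=5, random_state=42)
Xb, yb = make_blobs(n_samples=100, centers=3, random_state=42)
print(Xc.shape, Xr.shape, Xb.shape)   # reproducible because of the fixed seed
```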

53. ____, which can accept a NumPy RandomState generator or an integer seed.

Correct : B. random_state

54. In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed and scikit-learn offers at least ____ valid options

Correct : B. 2

55. In which of the following is each categorical label first turned into a positive integer and then transformed into a vector where only one feature is 1 while all the others are 0?

Correct : C. labelbinarizer class
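
A minimal sketch of the one-of-K encoding described in question 55, using scikit-learn's LabelBinarizer on hypothetical labels.

```python
from sklearn.preprocessing import LabelBinarizer

labels = ["cat", "dog", "bird", "dog"]        # made-up categorical labels
lb = LabelBinarizer()
print(lb.fit_transform(labels))               # each row has a single 1, all other entries 0
print(lb.classes_)                            # the label each column corresponds to
```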

56. ____ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.

Correct : A. removing the whole line

57. It's possible to specify if the scaling process must include both mean and standard deviation using the parameters ____.

Correct : C. both a & b
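
A hedged sketch assuming the parameters in question 57 are scikit-learn's with_mean and with_std on StandardScaler, shown on a tiny made-up matrix.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
centered_only = StandardScaler(with_mean=True, with_std=False).fit_transform(X)   # subtract the mean only
scaled_only = StandardScaler(with_mean=False, with_std=True).fit_transform(X)     # divide by the std only
print(centered_only)
print(scaled_only)
```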

58. Which of the following selects the best K high-score features?

Correct : C. selectkbest
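
A minimal scikit-learn sketch of SelectKBest keeping the K highest-scoring features (iris and the ANOVA F-test are used here only as convenient defaults).

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
X_best = SelectKBest(score_func=f_classif, k=2).fit_transform(X, y)
print(X.shape, X_best.shape)   # only the 2 best-scoring features remain
```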

59. How does the number of observations influence overfitting? Choose the correct answer(s). Note: all other parameters are the same. 1. In case of fewer observations, it is easy to overfit the data. 2. In case of fewer observations, it is hard to overfit the data. 3. In case of more observations, it is easy to overfit the data. 4. In case of more observations, it is hard to overfit the data.

Correct : A. 1 and 4

60. Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describes relationship of bias and variance with lambda.

Correct : C. in case of very large lambda; bias is high, variance is low

61. What is/are true about ridge regression? 1. When lambda is 0, model works like linear regression model 2. When lambda is 0, model doesn't work like linear regression model 3. When lambda goes to infinity, we get very, very small coefficients approaching 0 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity

Correct : A. 1 and 3
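
A hedged sketch of questions 60 and 61 on synthetic data: in scikit-learn the tuning parameter lambda is called alpha; a near-zero alpha behaves like plain linear regression, while a very large alpha drives the coefficients toward zero (high bias, low variance).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=100)

print(LinearRegression().fit(X, y).coef_)    # ordinary least squares
print(Ridge(alpha=1e-6).fit(X, y).coef_)     # lambda ~ 0: essentially the same coefficients
print(Ridge(alpha=1e6).fit(X, y).coef_)      # lambda -> infinity: coefficients approach 0
```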

62. Which of the following method(s) does not have closed form solution for its coefficients?

Correct : B. lasso

63. Which function is used for linear regression in R?

Correct : A. lm(formula, data)

64. In the mathematical equation of Linear Regression Y = β1 + β2X + ε, (β1, β2) refers to

Correct : C. (y-intercept, slope)

65. Suppose that we have N independent variables (X1, X2, ..., Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1?

Correct : B. relation between the x1 and y is strong

66. We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error?

Correct : D. can't say

67. We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. What do you expect will happen with bias and variance as you increase the size of training data?

Correct : D. bias increases and variance decreases

68. Suppose you got a situation where you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider? 1. I will add more variables 2. I will start introducing polynomial degree variables 3. I will remove some variables

Correct : A. 1 and 2

69. Problem: Players will play if the weather is sunny. Is this statement correct?

Correct : A. true

70. For the given weather data, calculate the probability of not playing

Correct : C. 0.36

71. Suppose you have trained an SVM with a linear decision boundary. After training the SVM, you correctly infer that your SVM model is underfitting. Which of the following options would you more likely consider when iterating on the SVM next time?

Correct : C. you will try to calculate more variables

72. The minimum time complexity for training an SVM is O(n^2). According to this fact, what sizes of datasets are not best suited for SVMs?

Correct : A. large datasets

73. What do you mean by generalization error in terms of the SVM?

Correct : B. how accurately the svm can predict outcomes for unseen data

74. We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that new feature will dominate other 2. Sometimes, feature normalization is not feasible in case of categorical variables 3. Feature normalization always helps when we use Gaussian kernel in SVM

Correct : B. 1 and 2
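
A hedged scikit-learn sketch related to question 74 (the wine dataset is used only because its features have very different scales): standardizing before the Gaussian (RBF) kernel keeps large-valued features from dominating the kernel distances, which usually improves accuracy here.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
raw = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()       # unscaled features
scaled = cross_val_score(make_pipeline(StandardScaler(), SVC(kernel="rbf")), X, y, cv=5).mean()
print(raw, scaled)   # the scaled pipeline typically scores much higher on this data
```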

75. Support vectors are the data points that lie closest to the decision surface.

Correct : A. true

76. If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for?

Correct : C. overfitting

77. What is the purpose of performing cross- validation?

Correct : C. both a and b

78. Suppose you are using a Linear SVM classifier with a 2 class classification problem. Now you have been given the following data in which some points are circled red that are representing support vectors. If you remove any one of the red points from the data, will the decision boundary change?

Correct : A. yes

79. Linear SVMs have no hyperparameters that need to be set by cross-validation

Correct : B. false
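
A brief sketch for questions 79 and 82, assuming scikit-learn: even a linear SVM has the hyperparameter C, and cross-validation (here via GridSearchCV) is the usual way to set it.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=5).fit(X, y)
print(search.best_params_)   # the C value chosen by 5-fold cross-validation
```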

80. For the given weather data, what is the probability that players will play if the weather is sunny?

Correct : D. 0.6
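
A worked Bayes computation consistent with the answers to questions 44, 70 and 80; since the weather table itself is not reproduced here, the counts below (play on 9 of 14 days, sunny on 5 of 14 days, sunny on 3 of the 9 play days) are an assumption, chosen because they are the ones commonly paired with this question.

```python
p_play = 9 / 14                    # ~0.64 (question 44)
p_no_play = 5 / 14                 # ~0.36 (question 70)
p_sunny = 5 / 14                   # assumed count: 5 sunny days out of 14
p_sunny_given_play = 3 / 9         # assumed count: 3 of the 9 play days were sunny

# Bayes' theorem: P(play | sunny) = P(sunny | play) * P(play) / P(sunny)
p_play_given_sunny = p_sunny_given_play * p_play / p_sunny
print(round(p_play, 2), round(p_no_play, 2), round(p_play_given_sunny, 2))   # 0.64 0.36 0.6
```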

81. 100 people are at a party. The given data gives information about how many wear pink or not, and whether they are a man or not. Imagine a pink-wearing guest leaves; what is the probability of the guest being a man?

Correct : B. 0.2

82. Linear SVMs have no hyperparameters

Correct : B. false

83. What are the different Algorithm techniques in Machine Learning?

Correct : C. both a & b

84. ____ can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints on a clustering algorithm.

Correct : B. semi-supervised

85. In reinforcement learning, this feedback is usually called as ____.

Correct : C. reward

86. In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ____.

Correct : A. deep learning

87. What does learning exactly mean?

Correct : C. learning is the ability to change

88. It is necessary to allow the model to develop a generalization ability and avoid a common problem called ____.

Correct : A. overfitting

89. Techniques that involve the usage of both labeled and unlabeled data are called ____.

Correct : B. semi-supervised

90. There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called ____.

Correct : C. model-free

91. ____ showed better performance than other approaches, even without a context-based model

Correct : B. deep learning

92. Which of the following sentences is correct?

Correct : C. both a & b

93. What is ‘Overfitting’ in Machine learning?

Correct : A. when a statistical model describes random error or noise instead of the underlying relationship

94. What is ‘Test set’?

Correct : A. test set is used to test the accuracy of the hypotheses generated by the learner.

95. What is the function of ‘Supervised Learning’?

Correct : C. both a & b

96. Common unsupervised applications include

Correct : D. all above

97. Reinforcement learning is particularly efficient when ____.

Correct : D. all above

98. During the last few years, many ____ algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state.

Correct : D. none of above

99. Common deep learning applications include

Correct : D. all above

100. If there is only a discrete number of possible outcomes (called categories), the process becomes a ____.

Correct : B. classification.