A. For these values of θ0 and θ1 that satisfy J(θ0,θ1)=0, we have that hθ(x(i))=y(i) for every training example (x(i),y(i))
This is correct: J(θ0,θ1)=0 means every training point lies exactly on the fitted line, so the hypothesis makes zero error on the training set.
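For reference, assuming the usual squared-error cost from the lectures, the reasoning is just that a sum of non-negative terms can only be zero if every term is zero:

J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 = 0
\;\Longrightarrow\;
h_\theta(x^{(i)}) = y^{(i)} \text{ for every } i = 1, \dots, m.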
B. For this to be true, we must have y(i)=0 for every value of i=1,2,…,m.
This is wrong: what must hold is J(θ0,θ1)=0, which (as the cost expression above shows) only requires hθ(x(i))=y(i) for every i, not that every target y(i) itself equals zero.
C. Gradient descent is likely to get stuck at a local minimum and fail to find the global minimum.
Counterexample: the squared-error cost of linear regression is convex and has no local optima other than the global minimum, so gradient descent cannot get stuck in a local minimum (see the sketch below).
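As an illustrative sketch (not part of the original answer; the toy data, learning rate, and iteration count are made up): batch gradient descent on the squared-error cost reaches the same minimum from very different starting points, because the cost surface is a single convex bowl.

```python
# Sketch: gradient descent on linear regression has no local minima to get stuck in.
# Toy data and hyperparameters below are illustrative assumptions.
import numpy as np

# Toy training set: y is exactly linear in x, so the global minimum has J = 0.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 + 0.5 * x          # generated with theta0 = 2.0, theta1 = 0.5
m = len(x)

def gradient_descent(theta0, theta1, alpha=0.05, iters=5000):
    for _ in range(iters):
        err = theta0 + theta1 * x - y        # h_theta(x^(i)) - y^(i) for each example
        grad0 = err.sum() / m                # dJ/dtheta0
        grad1 = (err * x).sum() / m          # dJ/dtheta1
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Two very different initializations converge to (almost) the same parameters,
# roughly (2.0, 0.5), the values used to generate the data.
print(gradient_descent(0.0, 0.0))
print(gradient_descent(-10.0, 10.0))
```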
D. We can perfectly predict the value of y even for new examples that we have not yet seen. (e.g., we can perfectly predict prices of even new houses that we have not yet seen.)
The word "perfectly" is too absolute: zero cost on the training set does not guarantee exact predictions on unseen examples such as new houses.