Yahoo 2015 Campus Recruiting: Research Engineer

This post collects some of the algorithm problems from Yahoo's campus recruiting, including the LeetCode problems Insert Interval and Interleaving String, plus programming challenges on pointer operations and arbitrary base conversion. It also touches on overfitting in machine learning, with worked solutions to the relevant exam questions.


Some of the problems I saw in this year's Yahoo campus recruiting.

 

Section C:

2. Insert Interval (LeetCode)

@喵星人与汪星人's write-up: http://huntfor.iteye.com/blog/2085095

3. Interleaving String (LeetCode)

Again from @喵星人与汪星人: http://huntfor.iteye.com/blog/2086539, which quotes a very concise solution by a Facebook engineer.

The GeeksforGeeks version: http://www.geeksforgeeks.org/check-whether-a-given-string-is-an-interleaving-of-two-other-given-strings-set-2/

This one walks through the reasoning step by step, which is instructive: http://www.cnblogs.com/lichen782/p/leetcode_interleaving_string.html

 

Section B:

11. void *pStr; myStruct myArray[10]; pStr = myArray; The question asks how pStr should be incremented.

I don't remember the answer choices; here is a post for reviewing pointers: http://m.blog.csdn.net/blog/tomlingyu/4013376

12. Conversion between arbitrary number bases

Essentially this code: http://stackoverflow.com/questions/8889733/converting-number-into-different-notations

 

Section D:

[8 points] True or False? If true, explain why in at most two sentences. If false, explain why or give a brief counterexample in at most two sentences.

- (True or False?) The error of a hypothesis measured over its training set provides a pessimistically biased estimate of the true error of the hypothesis.
Solution: False. The training error is optimistically biased, since it is usually smaller than the true error.

- (True or False?) If you are given m data points, and use half for training and half for testing, the difference between training error and test error decreases as m increases.
Solution: True. As we get more and more data, training error increases and test error decreases, and both converge to the true error.

- (True or False?) Overfitting is more likely when the set of training data is small.
Solution: True. With a small training set it is easier to find a hypothesis that fits the training data exactly, i.e., to overfit.

- (True or False?) Overfitting is more likely when the hypothesis space is small.
Solution: False. We can see this from the bias-variance trade-off: a small hypothesis space has more bias and less variance, so it is less likely to contain a hypothesis that fits the data very well, i.e., to overfit.

It turns out these are midterm exam questions from CMU, presumably their ML course, from about ten years ago. ⊙﹏⊙b

3. A derivation problem on maximum likelihood estimation, very similar to the example on Wikipedia: http://zh.wikipedia.org/zh-cn/%E6%9C%80%E5%A4%A7%E4%BC%BC%E7%84%B6%E4%BC%B0%E8%AE%A1
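The coin-toss example on that page can be condensed into a few lines; here is a sketch for n i.i.d. Bernoulli(p) observations with k successes (the exam problem may have used a different distribution):

```latex
L(p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i} = p^{k}(1-p)^{n-k}

\log L(p) = k \log p + (n-k)\log(1-p)

\frac{d}{dp}\log L(p) = \frac{k}{p} - \frac{n-k}{1-p} = 0
\quad\Longrightarrow\quad \hat{p}_{\mathrm{MLE}} = \frac{k}{n}
```

That is, the MLE is just the empirical frequency of successes, which matches the intuition behind the Wikipedia example.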

Reposted from: https://www.cnblogs.com/ffan/p/4025236.html
