Bad, Better, Best in IT

Bad: Cubicle 

Better: Office 

Best: Whatever works best for you.

 

Bad: Meetings without agendas. 

Better: Meetings with agendas. 

Best: Meetings whose need is so obvious to everyone that no agenda is needed.

 

Bad: Specs, waterfall, Systems Development Life Cycle. 

Better: Prototyping, agile, scrum. 

Best: Developers with enough domain knowledge to just build it.

 

Bad: No documentation. 

Better: Documentation. 

Best: No documentation needed.

 

Bad: No formal process. 

Better: Formal process. 

Best: People so much bigger than their jobs that process is rarely relied upon.

 

Bad: Theory without experience 

Better: Experience without theory 

Best: Both

 

Bad: Help desk without programmers. 

Better: Programmers available to customers. 

Best: Code that just works.

 

Bad: Phone calls 

Better: Emails 

Best: Application software that encapsulates required communication

 

Bad: Code with early exits. 

Better: Code without early exits. 

Best: Code made so simple by the underlying data structure that the early-exit debate is moot (see the sketch below).
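
One hypothetical sketch of that last point (my example; the post names no language or code): nested conditionals invite arguments about early returns, while the right data structure leaves nothing to argue about.

```python
# Hypothetical illustration: branchy code is where the early-return debate lives...
def shipping_cost_branchy(region):
    if region == "domestic":
        return 5
    if region == "neighboring":
        return 12
    if region == "overseas":
        return 30
    return None

# ...while a plain lookup table makes the whole question disappear.
SHIPPING_COST = {"domestic": 5, "neighboring": 12, "overseas": 30}

def shipping_cost(region):
    return SHIPPING_COST.get(region)

print(shipping_cost("overseas"))  # 30
```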

 

Bad: Bugs 

Better: Fixes 

Best: Enough 9's to never notice.
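
For concreteness (my arithmetic, not the author's): each extra 9 of availability cuts the yearly downtime budget by a factor of ten, which is what "enough 9's to never notice" means in practice.

```python
# Rough downtime budget per year implied by "N nines" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** -nines  # e.g. 3 nines -> 0.999
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): ~{downtime_min:.0f} min of downtime per year")
```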

 

Bad: Programmer error

Better: User error

Best: What's an error?

 

Bad: Missing deadlines 

Better: Hitting deadlines 

Best: A track record so good that deadlines are never given

 

Bad: Complex org chart 

Better: Simple org chart 

Best: Technology so sophisticated that fewer people are needed.

 

Bad: Non-technical boss 

Better: Technical boss 

Best: No boss

 

Bad: Management 

Better: Leadership 

Best: Self-motivation

 

Bad: Best practices, with a capital "B" (industry standards)

Better: best practices, with a small "b" (what we figured out) 

Best: Just do your fucking job.
