Chapter 10 of 11
Appendix: Common Traps (Master List)
<details>
<summary><strong>Chapter 0: What AI Actually Is</strong></summary>

- Treating AI outputs as truth
- Assuming AI understands context
- "It works on my test set, ship it!"
- Anthropomorphizing the model
- Not looking at your data
- Trusting data providers
- Ignoring missing data patterns
- Not versioning data
- Memorizing formulas without understanding
- Getting stuck in math rabbit holes
- Skipping linear algebra
- Treating probability as just counting
- Not using cross-validation
- Tuning hyperparameters on the test set
- Ignoring class imbalance
- Forgetting about feature scaling
- Not normalizing inputs
- Using sigmoid for hidden layers
- Not shuffling data
- Forgetting to set model to eval mode
- Not checking for NaNs
- Trusting LLM outputs without verification
- Using LLMs for tasks requiring reasoning
- Ignoring cost
- Not handling edge cases
- Over-relying on LLMs
- Not versioning prompts
- Ignoring latency
- No fallback logic
- Deploying and forgetting
- Optimizing for accuracy alone
- Not planning for retraining
- Adding AI because it's trendy

</details>
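Two of the traps above, skipping cross-validation and tuning hyperparameters on the test set, share one cure: carve out a validation set and touch the test set exactly once. A minimal sketch (the helper name `train_val_test_split` and the 70/15/15 split are illustrative choices, not a prescribed API):

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle once, then carve off validation and test sets.

    Tune hyperparameters on the validation set only; the test set
    is evaluated exactly once, at the very end.
    """
    items = list(data)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test
```

With 100 examples this yields a 70/15/15 split; the point is that the three sets are disjoint, so no tuning signal leaks from the test set.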
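"Forgetting about feature scaling" and "Not normalizing inputs" both come down to putting features on comparable scales before training. A minimal standardization sketch using only the standard library (the guard for constant features is one possible convention, not the only one):

```python
from statistics import mean, pstdev

def standardize(values):
    """Scale a single feature to zero mean and unit variance (z-scores)."""
    mu = mean(values)
    sigma = pstdev(values) or 1.0  # constant feature: avoid divide-by-zero
    return [(v - mu) / sigma for v in values]
```

Statistics for scaling must be computed on the training set and then reused on validation and test data, or the split above stops protecting you.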
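"Not checking for NaNs" is cheap to fix: assert that losses and gradients stay finite every step, so training halts at the first bad value instead of silently producing garbage. A hedged sketch (the helper name `check_finite` is hypothetical):

```python
import math

def check_finite(name, value):
    """Raise immediately if a training statistic goes non-finite.

    Failing on the first NaN/inf is far easier to debug than a model
    that keeps training on corrupted numbers for a thousand more steps.
    """
    if not math.isfinite(value):
        raise ValueError(f"{name} became non-finite: {value!r}")
    return value
```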
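"No fallback logic" is the production-LLM trap: when the model call fails or times out, the system should degrade gracefully (a cached answer, a simpler model, a canned response) rather than crash. A minimal wrapper, assuming `primary` and `fallback` are any callables you supply:

```python
def call_with_fallback(primary, fallback, *args, **kwargs):
    """Try the primary model; on any failure, fall back gracefully."""
    try:
        return primary(*args, **kwargs)
    except Exception:
        # In production you would also log the failure and its cause here.
        return fallback(*args, **kwargs)
```

Real systems usually add retries with backoff and per-call timeouts on top of this; the sketch only shows the core degrade-don't-crash shape.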