Experiments
Learning happens through experimentation. These are my technical explorations — hypotheses tested, discoveries made, and lessons learned.
Neural Network from Scratch
Hypothesis
Implementing backpropagation manually deepens understanding of how neural networks learn.
Discovery
The math is simpler than expected once you break it down: backpropagation is just the chain rule applied layer by layer. Gradient descent is elegant.
Use in Production?
No. Use frameworks for production, but understand what happens under the hood.
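For the curious, here is a minimal sketch of what the manual implementation looks like: a two-layer network trained on XOR with hand-derived gradients. The layer sizes, learning rate, and toy data are illustrative choices, not the exact setup from this experiment.

```python
# Minimal manual backprop sketch: two-layer network on XOR (illustrative
# hyperparameters, not the experiment's exact setup).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, which a linear model cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters for a 2 -> 8 -> 1 network.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)   # hidden activations
    p = sigmoid(h @ W2 + b2)   # predicted probabilities

    # Backward pass: chain rule, layer by layer.
    # For sigmoid + binary cross-entropy, dLoss/dlogits = p - y.
    d_logits = (p - y) / len(X)
    dW2 = h.T @ d_logits
    db2 = d_logits.sum(axis=0)
    d_h = d_logits @ W2.T * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Plain gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(p.round(3))  # should approach [0, 1, 1, 0]
```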
Feature Importance Methods Comparison
Hypothesis
Different feature importance methods give different rankings. Which should you trust?
Discovery
SHAP values are more reliable than built-in feature importance for correlated features.
Use in Production?
Yes. Use SHAP for interpretable features, permutation importance for model-agnostic ranking.
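A sketch of how the three rankings can be compared side by side, assuming scikit-learn plus the `shap` package; the synthetic dataset and model settings are illustrative, not the ones from this experiment.

```python
# Compare three global importance rankings on the same fitted model:
# built-in (impurity-based), permutation, and mean |SHAP|.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=500, n_features=6, n_informative=3,
                       random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 1. Built-in importance: fast, but biased toward correlated and
#    high-cardinality features.
builtin = model.feature_importances_

# 2. Permutation importance: model-agnostic; measures the score drop
#    when a feature's values are shuffled.
perm = permutation_importance(model, X, y, n_repeats=10,
                              random_state=0).importances_mean

# 3. SHAP: per-prediction attributions; the mean absolute value gives
#    a global ranking that handles correlated features more gracefully.
shap_values = shap.TreeExplainer(model).shap_values(X)
shap_global = np.abs(shap_values).mean(axis=0)

for name, scores in [("built-in", builtin), ("permutation", perm),
                     ("shap", shap_global)]:
    print(name, np.argsort(scores)[::-1])  # features, most to least important
```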
Data Augmentation for Tabular Data
Hypothesis
Can SMOTE-like techniques actually improve model performance on imbalanced datasets?
Discovery
Mixed results. SMOTE helps on some datasets but hurts on others. Threshold tuning often works better.
Use in Production?
It depends. Try threshold moving first. SMOTE is situational, not a universal solution.
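A sketch of threshold moving, the approach this experiment found more reliable than SMOTE: fit the model as usual, then pick the decision threshold that maximizes the metric you care about on held-out data. The dataset, model, and choice of F1 as the metric are illustrative assumptions.

```python
# Threshold moving on an imbalanced toy dataset: sweep candidate
# thresholds on a validation set instead of using the default 0.5.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# ~5% positive class to mimic an imbalanced problem.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_val)[:, 1]

# Evaluate F1 across a grid of thresholds and keep the best one.
thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_val, (proba >= t).astype(int)) for t in thresholds]
best = thresholds[int(np.argmax(scores))]

print(f"default 0.5 F1: {f1_score(y_val, (proba >= 0.5).astype(int)):.3f}")
print(f"best threshold {best:.2f} F1: {max(scores):.3f}")
```

In production, the chosen threshold should be validated on a separate holdout, since picking it on the same split you report on leaks optimism into the metric.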
How I Run Experiments
Hypothesis
Define what I want to learn or prove
Setup
Build minimal test environment
Execute
Run tests and collect evidence
Document
Share learnings, especially failures