Experiments

Learning happens through experimentation. These are my technical explorations — hypotheses tested, discoveries made, and lessons learned.

✓ Completed · Learning · January 2025

Neural Network from Scratch

Hypothesis

Implementing backpropagation manually deepens understanding of how neural networks learn.

Discovery

The math is simpler than expected once you break it down. Gradient descent is elegant.

Python · NumPy

Use in Production?

No

Use frameworks for production, but understand what happens under the hood.
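For flavor, here is a minimal sketch of what manual backpropagation looks like for a one-hidden-layer regression network in NumPy. The toy data, layer sizes, and learning rate are illustrative assumptions, not the exact code from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): y is roughly the sum of the inputs
X = rng.normal(size=(200, 3))
y = X.sum(axis=1, keepdims=True) + 0.1 * rng.normal(size=(200, 1))

# One hidden layer with tanh activation
W1 = rng.normal(scale=0.5, size=(3, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.05
for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)          # hidden activations
    y_hat = h @ W2 + b2               # linear output
    loss = np.mean((y_hat - y) ** 2)  # mean squared error

    # Backward pass: chain rule, one layer at a time
    d_yhat = 2 * (y_hat - y) / len(X)        # dL/dy_hat
    dW2 = h.T @ d_yhat                       # dL/dW2
    db2 = d_yhat.sum(axis=0, keepdims=True)
    d_h = d_yhat @ W2.T                      # back through the output layer
    d_pre = d_h * (1 - h ** 2)               # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_pre
    db1 = d_pre.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

The whole backward pass is a handful of matrix products, which is the "simpler than expected" part: each line is just the chain rule applied to one layer.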

✓ Completed · Performance · December 2024

Feature Importance Methods Comparison

Hypothesis

Different feature importance methods give different rankings. Which should you trust?

Discovery

SHAP values are more reliable than built-in feature importance for correlated features.

Python · Scikit-learn · SHAP

Use in Production?

Yes

Use SHAP for interpretable per-feature attributions, permutation importance for model-agnostic ranking.
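A sketch of how the three methods can be compared side by side, assuming a random forest and the shap library; the synthetic dataset with redundant features is an illustrative stand-in for the correlated-feature case.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data with redundant (correlated) features to expose the differences
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           n_redundant=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 1. Built-in impurity-based importance (fast, but biased with correlated features)
builtin = model.feature_importances_

# 2. Permutation importance (model-agnostic, measured on held-out data)
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# 3. SHAP values (per-prediction attributions; mean |value| gives a global ranking)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Depending on the shap version this is a list per class or a 3D array
if isinstance(shap_values, list):
    shap_values = shap_values[1]          # class-1 attributions
elif shap_values.ndim == 3:
    shap_values = shap_values[:, :, 1]
shap_global = np.abs(shap_values).mean(axis=0)

for i in range(X.shape[1]):
    print(f"f{i}: builtin={builtin[i]:.3f}  "
          f"perm={perm.importances_mean[i]:.3f}  shap={shap_global[i]:.3f}")
```

Comparing the three rankings on the same held-out split is what makes the disagreement on correlated features visible.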

✓ Completed · Tooling · November 2024

Data Augmentation for Tabular Data

Hypothesis

Can SMOTE-like techniques actually improve model performance on imbalanced datasets?

Discovery

Mixed results. SMOTE helps on some datasets but hurts on others. Threshold tuning often works better.

Python · imbalanced-learn · Scikit-learn

Use in Production?

It Depends

Try threshold moving first. SMOTE is situational, not a universal solution.
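A sketch of the two approaches side by side, assuming imbalanced-learn's SMOTE and a simple F1-based threshold sweep; the synthetic dataset and logistic-regression model are illustrative choices, not the ones from the experiment.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data (~5% positives), purely illustrative
X, y = make_classification(n_samples=5000, weights=[0.95], flip_y=0.02, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Option A: oversample the minority class with SMOTE, keep the default 0.5 threshold
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
smote_model = LogisticRegression(max_iter=1000).fit(X_res, y_res)
f1_smote = f1_score(y_test, smote_model.predict(X_test))

# Option B: train on the original data and move the decision threshold instead
# (in a real project, pick the threshold on a validation split, not the test set)
base_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = base_model.predict_proba(X_test)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
f1_by_threshold = [f1_score(y_test, proba >= t) for t in thresholds]
best_t = thresholds[int(np.argmax(f1_by_threshold))]

print(f"SMOTE + default threshold: F1 = {f1_smote:.3f}")
print(f"threshold moving (t={best_t:.2f}): F1 = {max(f1_by_threshold):.3f}")
```

Threshold moving leaves the training distribution untouched and only changes the operating point, which is why it is worth trying before resampling.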

How I Run Experiments

1. Hypothesis: Define what I want to learn or prove
2. Setup: Build a minimal test environment
3. Execute: Run tests and collect evidence
4. Document: Share learnings, especially failures