Machine Learning
-
1. Introduction and KNN
- A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E
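Alongside that definition, a minimal k-nearest-neighbors sketch in NumPy, assuming Euclidean distance and majority vote over the k closest training points; the function and variable names here are illustrative, not taken from the notes.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Label x_query by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]                    # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                   # majority vote

# Toy usage: two small clusters
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.95, 0.9]), k=3))  # expected: 1
```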
-
2. Perceptron
- A set of examples is `linearly separable` if there exists a linear decision boundary (a hyperplane) that separates the points of one class from the other
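On a linearly separable set, the perceptron update is guaranteed to terminate. A minimal sketch, assuming labels in $\{-1, +1\}$ and a bias folded into the weight vector; all names are illustrative.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron: repeatedly fix mistakes; terminates if the data are linearly separable."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # fold the bias into the weights
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x_i, y_i in zip(X_aug, y):
            if y_i * (w @ x_i) <= 0:   # misclassified (or exactly on the boundary)
                w += y_i * x_i         # perceptron update
                mistakes += 1
        if mistakes == 0:              # a full pass with no mistakes: separating hyperplane found
            break
    return w

# Toy usage on a linearly separable set with labels in {-1, +1}
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # expected: [ 1.  1. -1. -1.]
```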
-
3. Nonparametric Methods
- Use of probability distributions that have specific functional forms governed by a small number of parameters whose values are to be determined from...
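In contrast with that parametric approach, nonparametric methods let the data drive the shape of the estimate. A small illustrative sketch comparing a Gaussian MLE fit with a histogram density estimate; the data and names below are assumptions, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

# Parametric: assume a Gaussian form and estimate its two parameters by MLE
mu_hat = data.mean()
sigma2_hat = data.var()          # MLE variance uses the 1/N normalization
print(f"Gaussian fit: mu={mu_hat:.2f}, sigma^2={sigma2_hat:.2f}")

# Nonparametric: a histogram makes no functional-form assumption;
# its shape is driven directly by the data (and the bin width)
counts, edges = np.histogram(data, bins=30, density=True)
print("histogram density in the first few bins:", np.round(counts[:5], 3))
```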
-
4. Introduction to ML Concepts
- **Inductive Bias**
-
5. Naive Bayes
- **Dataset $\mathcal{D}$**: i.i.d. (independent and identically distributed) samples drawn from some unknown distribution
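A minimal Gaussian Naive Bayes sketch over such an i.i.d. dataset, assuming class-conditionally independent features with per-class Gaussian likelihoods (the Gaussian NB setting mentioned in Homework 2); all names below are illustrative.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Estimate per-class priors, feature means, and feature variances."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),         # prior P(y = c)
                     Xc.mean(axis=0),          # per-feature mean
                     Xc.var(axis=0) + 1e-9)    # per-feature variance (small floor for stability)
    return params

def predict_gaussian_nb(params, x):
    """Pick the class maximizing log prior plus summed per-feature Gaussian log-likelihoods."""
    def log_joint(c):
        prior, mu, var = params[c]
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=log_joint)

# Toy usage
X = np.array([[1.0, 2.0], [1.2, 1.9], [4.0, 5.0], [3.8, 5.2]])
y = np.array([0, 0, 1, 1])
params = fit_gaussian_nb(X, y)
print(predict_gaussian_nb(params, np.array([4.1, 4.9])))  # expected: 1
```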
-
6. Linear Regression
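For reference, the standard least-squares objective and its closed-form solution, writing $X \in \R^{N \times d}$ for the stacked inputs and assuming $X^\top X$ is invertible; the notation here is assumed, not necessarily the note's.

$$
\min_{w} \ \frac{1}{N} \sum_{i=1}^{N} \big(w^\top x^{(i)} - y^{(i)}\big)^2,
\qquad
w^{*} = (X^\top X)^{-1} X^\top y
$$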
-
7. SVM
-
8. Empirical Risk Minimization (ERM)
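The standard ERM objective over a hypothesis class $\mathcal{H}$ with loss $\ell$, written with the dataset notation used elsewhere in these notes; the symbols are assumed.

$$
\hat{h} = \arg\min_{h \in \mathcal{H}} \hat{R}(h),
\qquad
\hat{R}(h) = \frac{1}{N} \sum_{i=1}^{N} \ell\big(h(x^{(i)}), y^{(i)}\big)
$$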
-
9. Gradient Descent
- Function $f: \R^d \rightarrow \R$ is `convex` if $\forall x_1, x_2 \in \R^d$ and $0 \le t \le 1$: $f(t x_1 + (1 - t) x_2) \le t f(x_1) + (1 - t) f(x_2)$
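A minimal gradient descent sketch on a convex quadratic, assuming a fixed step size; the objective and all names below are illustrative, not taken from the notes.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Iterate x <- x - lr * grad(x); for a convex objective with a suitable step size this approaches a minimizer."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy convex objective f(x) = 0.5 * ||A x - b||^2, with gradient A^T (A x - b)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 1.0])
grad = lambda x: A.T @ (A @ x - b)
print(gradient_descent(grad, x0=[0.0, 0.0]))  # expected close to [2.0, 1.0]
```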
-
10. Bias-Variance Tradeoff
- $\mathcal{D} = \{(x^{(i)}, y^{(i)})\}_{i = 1}^N$
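For a dataset drawn this way and squared loss, the standard bias-variance decomposition, assuming $y = f(x) + \epsilon$ with noise variance $\sigma^2$ and writing $\hat{h}_{\mathcal{D}}$ for the predictor trained on $\mathcal{D}$; the notation is assumed here.

$$
\mathbb{E}_{\mathcal{D}, \epsilon}\Big[\big(y - \hat{h}_{\mathcal{D}}(x)\big)^2\Big]
= \underbrace{\big(\mathbb{E}_{\mathcal{D}}[\hat{h}_{\mathcal{D}}(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_{\mathcal{D}}\Big[\big(\hat{h}_{\mathcal{D}}(x) - \mathbb{E}_{\mathcal{D}}[\hat{h}_{\mathcal{D}}(x)]\big)^2\Big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{noise}}
$$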
-
11. Cross Validation and Model Selection
- Let's say we have the following two models:
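A minimal k-fold cross-validation sketch for comparing two such models, in the spirit of the k-fold selection used in Homework 4; the `train_fn`/`error_fn` interface and all names below are illustrative assumptions.

```python
import numpy as np

def k_fold_cv_error(X, y, train_fn, error_fn, k=5, seed=0):
    """Average held-out error over k folds; the lower-error model is preferred."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    errors = []
    for i in range(k):
        val = folds[i]                                                   # held-out fold
        train = np.concatenate([folds[j] for j in range(k) if j != i])  # remaining folds
        model = train_fn(X[train], y[train])
        errors.append(error_fn(model, X[val], y[val]))
    return float(np.mean(errors))

# Toy usage: constant predictor vs. least-squares fit on noisy linear data
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, size=100)

mean_model = lambda Xtr, ytr: ("mean", ytr.mean())
ls_model = lambda Xtr, ytr: ("ls", np.linalg.lstsq(Xtr, ytr, rcond=None)[0])

def mse(model, Xv, yv):
    kind, p = model
    pred = np.full(len(yv), p) if kind == "mean" else Xv @ p
    return np.mean((yv - pred) ** 2)

print(k_fold_cv_error(X, y, mean_model, mse))  # larger error
print(k_fold_cv_error(X, y, ls_model, mse))    # smaller error
```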
-
Homework 1 Solutions
Solutions to CS 446/ECE 449 Homework 1 covering K-NN, Perceptron Algorithm, and MLE/MAP estimation
-
Homework 2 Solutions
Solutions to ML Homework 2 covering Naive Bayes classification, Gaussian Naive Bayes, Logistic Regression theory, and optimization techniques including gradient descent variants
-
Homework 3 Solutions
Solutions to ML Homework 3 covering Support Vector Machines (SVM), dual optimization, kernel methods, and linear regression techniques including Ridge and Lasso with ISTA
-
Homework 4 Solutions
Solutions to ML Homework 4 covering bias-variance decomposition in Ridge Regression, optimal classifier under squared loss, and model selection using k-fold cross-validation
-
Supplement: Binary Cross-Entropy Loss Derivation
For a single training example:
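The standard form of the loss, with label $y \in \{0, 1\}$ and predicted probability $\hat{y}$ (for logistic regression, $\hat{y} = \sigma(w^\top x)$); the supplement's own symbols may differ.

$$
\ell(y, \hat{y}) = -\big( y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \big)
$$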
-
Supplement: Full SVM Derivation
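For reference, the standard soft-margin primal that a full SVM derivation typically starts from; the notation here is assumed, not necessarily the supplement's.

$$
\min_{w, b, \xi} \ \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{N} \xi_i
\quad \text{s.t.} \quad y^{(i)} \big(w^\top x^{(i)} + b\big) \ge 1 - \xi_i, \quad \xi_i \ge 0
$$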
-
Supplement: NumPy Tutorial
-
Supplement: Probability Distributions
- The number of successes in a fixed number of independent trials, each with the same probability of success.
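A small sketch of this setting (the Binomial distribution), computing the exact PMF and checking it against simulated trials; the numbers are illustrative.

```python
import math
import numpy as np

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success probability p)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3
print(binomial_pmf(3, n, p))             # exact probability of 3 successes

# Empirical check by simulation
rng = np.random.default_rng(0)
samples = rng.binomial(n, p, size=100_000)
print(np.mean(samples == 3))             # should be close to the exact value
```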
-
Supplement: Representer Theorem
The proof of the representer theorem
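For context, the standard statement the proof establishes: for a regularized empirical risk over predictors of the form $f(x) = w^\top \phi(x)$, some minimizer lies in the span of the mapped training points; the notation here is assumed, not necessarily the supplement's.

$$
w^{*} = \sum_{i=1}^{N} \alpha_i \, \phi\big(x^{(i)}\big),
\qquad \text{equivalently} \qquad
f^{*}(x) = \sum_{i=1}^{N} \alpha_i \, k\big(x^{(i)}, x\big)
$$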