From b1c4da2087385e856304821c810e6bdc374e4e89 Mon Sep 17 00:00:00 2001
From: r-keller
Date: Mon, 1 Oct 2018 19:47:36 -0400
Subject: [PATCH] edit

---
 book/thy.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/book/thy.tex b/book/thy.tex
index 265d639..31f6607 100644
--- a/book/thy.tex
+++ b/book/thy.tex
@@ -161,7 +161,7 @@ \section{Probably Approximately Correct Learning}
 at most $5\%$ error. If this situtation is guaranteed to happen, then
 this hypothetical learning algorithm is a PAC learning algorithm. It
 satisfies ``probably'' because it only failed in one out of ten cases,
-and it's ``approximate'' because it achieved low, but non-zero, error
+and its ``approximate'' because it achieved low, but non-zero, error
 on the remainder of the cases.
 
 This leads to the formal definition of an $(\ep,\de)$ PAC-learning
@@ -510,7 +510,7 @@ \section{Complexity of Infinite Hypothesis Spaces}
 dimension is the \emph{maximum} number of points for which you can
 always find such a classifier.
 
-\thinkaboutit{What is that labeling? What is it's name?}
+\thinkaboutit{What is that labeling? What is its name?}
 
 You can think of VC dimension as a game between you and an adversary.
 To play this game, \emph{you} choose $K$ unlabeled points however you