Looking at Lucky 13: AdaBoost. A few items are a bit unclear for us newbies.
First, in the `fit()` method, there is just a single pass over the data `X`, while the original Freund & Schapire (1995) paper loops for `T` iterations, refitting the classifiers on each pass based on the evolving weights. It looks like the version here is based on Zhu et al., 2009. It might be worth a few words explaining the source of the algorithm, and why this version needs only one pass over the samples.
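For comparison, here is a minimal sketch of the classic multi-round loop from the 1995 paper. This is hypothetical illustration code, not this repo's implementation; the function names and the brute-force stump search are my own choices:

```python
import numpy as np

def fit_adaboost(X, y, T=10):
    """Sketch of the Freund-Schapire loop: T rounds, each round refits a
    weak learner (here a decision stump) on the current sample weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # sample weights, updated every round
    stumps, alphas = [], []
    for _ in range(T):
        # weak learner: exhaustively pick the best weighted stump
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-10)            # guard against log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)   # classifier weight
        w *= np.exp(-alpha * y * pred)          # up-weight the mistakes
        w /= w.sum()                            # renormalize before next round
        stumps.append((j, thr, pol))
        alphas.append(alpha)
    return stumps, alphas

def predict_adaboost(X, stumps, alphas):
    # weighted vote of all stumps
    agg = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
              for (j, t, p), a in zip(stumps, alphas))
    return np.sign(agg)
```

The key difference from a single-pass version is that the stump search runs inside the loop, so each round's weak learner sees the reweighted data.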
Second, just from a learning perspective, it would be great to provide a data set that mimics the illustrations in the video, so we can verify that things work as expected. For extra credit, use Matplotlib to create the decision-boundary visualization from the video.
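Something along these lines would do, I think. This is a rough sketch with made-up data and a stand-in classifier (`toy_predict` is a placeholder for a fitted model's `predict`), not taken from the repo:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # headless backend so this runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# two Gaussian blobs, labels in {-1, +1} as AdaBoost expects
X = np.vstack([rng.normal(-1, 0.7, size=(50, 2)),
               rng.normal(+1, 0.7, size=(50, 2))])
y = np.array([-50 * [0], 50 * [0]]).size * 0 + np.array([-1] * 50 + [+1] * 50)

def toy_predict(P):
    # placeholder: swap in the fitted AdaBoost model's predict() here
    return np.where(P.sum(axis=1) > 0, 1, -1)

# evaluate the classifier on a grid and shade the two decision regions
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
Z = toy_predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, Z, alpha=0.3, cmap="coolwarm")
plt.scatter(X[:, 0], X[:, 1], c=y, cmap="coolwarm", edgecolors="k")
plt.savefig("boundary.png")
```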
Third, it might be worth pointing out refinements a real design would need. For example, here are the decision stumps created by the test code. Notice that feature 23 is used twice: same polarity, just a different threshold. Is this a limitation of this simple example, or actually a useful quirk of AdaBoost?
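For what it's worth, I believe reusing a feature at different thresholds is a genuine (and useful) behavior of AdaBoost, not just an artifact of this example: two cuts on one feature let the ensemble carve out an interval that no single stump can express. A quick check with scikit-learn (hypothetical demo data, not this repo's test code) seems to confirm it:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# 1-D "interval" data: positives only in the middle, so a single
# threshold on feature 0 can never separate the classes
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.where((X.ravel() >= 3) & (X.ravel() <= 6), 1, -1)

# the default weak learner is a depth-1 decision tree, i.e. a stump
clf = AdaBoostClassifier(n_estimators=5, random_state=0).fit(X, y)

# collect the threshold each stump placed on feature 0
thresholds = {round(float(est.tree_.threshold[0]), 3)
              for est in clf.estimators_}
```

If the quirk is real, `thresholds` should contain at least two distinct cuts on the same feature, and the ensemble should fit the interval that no single stump can.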
Thanks much for putting this material together!