3. LUPI
V. Vapnik (2006)
V. Vapnik, A. Vashist (2009)
https://doi.org/10.1016/j.neunet.2009.06.042
https://ieeexplore.ieee.org/abstract/document/1635748
“can the generalization performance be improved using the privileged information?”
4. Machine Learning * LUPI
LUPI: (X, X*, y) → F(X)
X* - privileged information
X* - may not be available for all rows in X
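The setting above can be sketched as a thin wrapper (hypothetical names; the point is only the interface: `fit` sees X* at training time, the returned F(X) never does):

```python
import numpy as np

def fit_lupi(X, X_star, y):
    """Train with privileged information X*. A real LUPI method would
    use X* to shape the loss, sample weights, etc.; this placeholder
    ignores it, but the returned model F depends on X only."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # plain least squares on X
    return lambda X_new: X_new @ w            # F(X): no X* at test time

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_star = np.array([[0.2], [0.9], [0.5]])      # privileged, train-only
y = np.array([1.0, 2.0, 3.0])

F = fit_lupi(X, X_star, y)
print(F(X))
```

The key constraint from the slide is encoded in the signature: X* enters training but is absent from prediction.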
6. Types of PI:
● information from the future
○ signal
○ features
● additional modality
○ captions for images
● additional intermediate concepts (labels)
7. X*:
● new data itself
○ X*SVM+
● X*_i = margin of x_i vs the hyperplane from SVM(X*, y)
○ dSVM+ (“margin transfer”)
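A minimal numpy-only sketch of the “margin transfer” idea: train a separator on the privileged features, then read off each sample's margin (a toy perceptron stands in for the SVM purely to keep the example self-contained):

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Toy linear separator standing in for SVM(X*, y); y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: update
                w += yi * xi
                b += yi
    return w, b

# Privileged features X* and labels y (toy, linearly separable in X*)
X_star = np.array([[2.0], [1.5], [-1.0], [-2.5]])
y = np.array([1, 1, -1, -1])

w, b = perceptron(X_star, y)
# Per-sample margin of x*_i vs the hyperplane learned on (X*, y);
# dSVM+ would transfer these as targets/weights when training on X.
margins = y * (X_star @ w + b) / np.linalg.norm(w)
print(margins)  # larger margin = "easier" sample according to X*
```

In the actual dSVM+ recipe these margins come from a real SVM trained on X*, and the model on X is then fit to respect them.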
10. “WSVM vs SVM+”
by M. Lapin, M. Hein, B. Schiele (2014)
https://arxiv.org/pdf/1306.3161.pdf
“can the generalization performance be improved using only the sample weights as the privileged information?”
11. ● WSVM outperforms SVM+
● the SVM+ solution is unique; the WSVM solution is not
● weights for WSVM can be inferred from a given SVM+ solution
● SVM+ is a special case of WSVM
● WSVM weights can (in theory) be learned just like any other hyperparameters
13. “Understanding LUPI”
by A. Momeni, K. Tatwawadi (2018)
https://web.stanford.edu/~kedart/files/lupi.pdf
“can the generalization performance of a NN be improved using a LUPI-like approach?”
14. 1. Train a function (NN): P(y=1 | x*)
2. Use it to set a per-sample learning rate during the training of another (final) NN: P(y=1 | x)
3. Profit (+3%)
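A hedged sketch of steps 1–2 with toy logistic models in numpy (the scaling rule — confident-teacher samples get larger steps — is one plausible reading of the approach, not the paper's exact recipe):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 200
x_star = rng.normal(size=(n, 1))                  # privileged feature
y = (x_star[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)
x = x_star + 0.5 * rng.normal(size=(n, 1))        # noisier regular feature

# 1. "Teacher" P(y=1 | x*): here a fixed logistic model on x*,
#    converted to confidence in the true label of each sample
teacher_conf = sigmoid(3.0 * x_star[:, 0])
teacher_conf = np.where(y == 1, teacher_conf, 1 - teacher_conf)

# 2. Train the final model P(y=1 | x) with per-sample learning rates
w, b = 0.0, 0.0
base_lr = 0.1
for _ in range(100):
    p = sigmoid(w * x[:, 0] + b)
    g = p - y                                     # logistic-loss gradient
    lr = base_lr * teacher_conf                   # per-sample learning rate
    w -= np.mean(lr * g * x[:, 0])
    b -= np.mean(lr * g)

acc = np.mean((sigmoid(w * x[:, 0] + b) > 0.5) == (y == 1))
print(f"train accuracy: {acc:.2f}")
```

The teacher never touches the final model's inputs; it only modulates how hard each sample pushes during training.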
16. “Boosting with Side Information”
by J. Chen, X. Liu and S. Lyu (2012)
http://www.cse.msu.edu/~liuxm/publication/Chen_Liu_Lyu_ACCV12_Sideinfo.pdf
“can the generalization performance of boosted decision stumps be improved using PI?”
17. 1. Boost decision stumps
2. For stumps of the current iteration which use X*:
a. train a “replacement”: f(x) → x*
b. use the replacement instead of the original feature within the stump
3. Profit
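Steps 2a–2b in a toy form: a single stump that thresholds x* is kept usable at test time by learning a “replacement” f(x) → x* (least-squares regression here; names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x_star = rng.normal(size=n)               # privileged feature
y = (x_star > 0).astype(int)              # a stump on x* is perfect
x = x_star + 0.3 * rng.normal(size=n)     # regular (noisy) feature

# 2a. Train a "replacement" f(x) -> x* (least-squares line)
A = np.vstack([x, np.ones(n)]).T
coef, intercept = np.linalg.lstsq(A, x_star, rcond=None)[0]
f = lambda x_new: coef * x_new + intercept

# 2b. The stump that used x* now thresholds f(x) instead,
#     so prediction needs only the regular feature x
stump_repl = lambda xr: (f(xr) > 0).astype(int)

acc = np.mean(stump_repl(x) == y)
print(f"replacement-stump accuracy: {acc:.2f}")
```

In the paper this substitution happens per boosting iteration, for exactly those stumps that selected a side-information feature.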
20. ● https://arxiv.org/pdf/1805.11614.pdf (2018)
○ Deep Learning: dropout parametrized by f(x*)
● https://papers.nips.cc/paper/3960-on-the-theory-of-learnining-with-privileged-information.pdf (2010)
○ privileged empirical risk minimization for a learning rate boost
● https://calculatedcontent.com/2014/11/05/learning-using-privileged-information-weighted-svms/
○ overview blog post about LUPI
● https://www.youtube.com/watch?v=YRtfKosPHd0
● https://www.simonsfoundation.org/event/march-12-2014-learning-with-a-nontrivial-teacher/
○ LUPI talks by V. Vapnik
● http://www.jmlr.org/papers/volume16/vapnik15b/vapnik15b.pdf (2015)
○ further fundamental improvements on LUPI: knowledge transfer and similarity control
● http://users.sussex.ac.uk/~nq28/pubs/ShaQuaLam13.pdf
○ ranking images with LUPI (SVM)