Information
Course beginning: Feb. 26, 2020
Duration: 13 weeks.
Lesson timetable
Tuesday 10:00 - 14:00, Room 20, SPV
Wednesday 08:00 - 12:00, Room 20, SPV
Tools
Links
International labs link
Motivation
The main objective of the course “Machine Learning” (ML) is to introduce students to state-of-the-art methods and basic programming tools for the analysis of complex and big data sets.
The ML course provides the basic theoretical and practical tools for the study and design of adaptive and machine learning algorithms: linear, nonlinear, supervised and unsupervised, for static (or memoryless) applications such as regression, classification and clustering, and for dynamic applications such as adaptive filtering, modeling and prediction of complex physical phenomena. In particular, the course provides the basic theoretical and practical tools for linear adaptive filtering in different applications, the formal statistical methods for performance analysis of ML algorithms, and the basic tools for algorithm development on vector and parallel architectures and in large-scale distributed environments.
A secondary objective of the ML course is to teach students how to develop and evaluate simple ML or adaptive filtering algorithms in various application domains, such as multimedia and multimodal communications, biological and biomedical signals, sound, telecommunications, remote sensing, social networks, the Internet, big data, Industry 4.0, etc. Therefore, every student will carry out a project based on state-of-the-art machine learning research.
Objectives of the Course
The student acquires the basic theory and is able to design, implement and evaluate the performance of the most common machine learning algorithms, also on parallel machines of different grain and in several application contexts.
Main topics
Introduction to Adaptive Computation and Machine Learning
Bayesian Approach to Adaptive Computation
Basic Supervised and Unsupervised Algorithms for Pattern Recognition, Regression and Clustering
Solution of Underdetermined Systems of Linear Equations with Minimal L1- or L2-Norm
Multilayer Neural Networks
Recurrent Neural Networks
Kernel Methods and Regularized Networks
Dynamic Stochastic Neural Networks and Probabilistic Graphical Models
Deep Neural Network Architectures, Learning and Applications
References
Textbooks
- Aurelio Uncini, “Mathematical Elements for Machine Learning”, Ed. 2020 (free PDF available only for the students).
- Aurelio Uncini, “Fundamentals of Adaptive Signal Processing”, Springer, ISBN 978-3-319-02806-4, Feb. 2015.
- S. Scardapane, D. Comminiello, M. Scarpiniti, A. Uncini, “Designing Large Machine Learning Simulations Using the Lynx Toolbox”.
Other recommended reading
- Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”, Adaptive Computation and Machine Learning series, MIT Press.
- Sergios Theodoridis, “Machine Learning: A Bayesian and Optimization Perspective”, 1st Edition, Elsevier, 2015.
- Ian Goodfellow, Yoshua Bengio and Aaron Courville, “Deep Learning”, MIT Press, Ed. 2018.
- Dimitri P. Bertsekas and John N. Tsitsiklis, “Parallel and Distributed Computation: Numerical Methods”, ISBN 1-886529-01-9.
- Rumelhart, D.E., Hinton, G.E., & McClelland, J.L. (1986). “A General Framework for Parallel Distributed Processing”, in Rumelhart, D.E., & McClelland, J.L. and the PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press: Cambridge, MA.