
Faculty of Mathematics, Informatics and Mechanics, University of Warsaw


Research seminar of the Logic Group: Approximate reasoning in data mining


Lifelong Machine Learning


Speaker: Hung Son Nguyen

2019-05-31 14:15

The currently dominant machine learning paradigm learns in isolation: given a training dataset, it runs an ML algorithm on that dataset alone to produce a model, making no attempt to retain the learned knowledge and use it in subsequent learning. Although this isolated paradigm, based primarily on data-driven optimization, has been very successful, it requires a large number of training examples and is only suitable for well-defined, narrow tasks in closed environments. In contrast, we humans learn effectively from a few examples and in a dynamic, open world, because our learning is also very much knowledge-driven: knowledge learned in the past helps us learn new things with little data or effort and adapt to new or unseen situations.

Lifelong Machine Learning is one of the newest machine learning paradigms: it learns continuously, accumulates the knowledge learned in the past, and uses and adapts that knowledge to help future learning and problem solving. This research area integrates techniques from multiple subfields of Machine Learning and Artificial Intelligence, including incremental learning, transfer learning, multi-task learning, online learning, and knowledge representation and maintenance.
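The accumulate-and-reuse loop described above can be sketched in a few lines; everything here is an illustrative assumption, not the formalism of the talk or the book: the class name, the running-average knowledge base, and the plain SGD linear learner are stand-ins for whatever knowledge store and base learner a real system would use.

```python
class LifelongLearner:
    """Hypothetical sketch of the lifelong learning loop: learn a
    sequence of tasks, retain a knowledge base, and seed each new
    task from it instead of starting from scratch."""

    def __init__(self):
        # Knowledge base (assumption): a running average of the
        # weight vectors learned on past tasks.
        self.knowledge = None
        self.tasks_seen = 0

    def _init_weights(self, n_features):
        # Knowledge-driven initialization: reuse accumulated weights
        # when past tasks exist, otherwise start from zeros.
        if self.knowledge is not None:
            return list(self.knowledge)
        return [0.0] * n_features

    def learn_task(self, X, y, lr=0.1, epochs=50):
        # Base learner: plain SGD for a linear model (illustrative).
        w = self._init_weights(len(X[0]))
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                pred = sum(wj * xj for wj, xj in zip(w, xi))
                err = yi - pred
                w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
        # Accumulate: fold this task's weights into the knowledge base.
        if self.knowledge is None:
            self.knowledge = list(w)
        else:
            self.knowledge = [
                (k * self.tasks_seen + wi) / (self.tasks_seen + 1)
                for k, wi in zip(self.knowledge, w)
            ]
        self.tasks_seen += 1
        return w
```

The design choice this sketch isolates is the contrast drawn in the abstract: an isolated learner would discard `w` after each task, whereas here each new task starts from knowledge distilled out of all previous ones.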

In this talk I would like to share some of the most recent knowledge and information about this approach. The presentation is based on the recent book on Lifelong Machine Learning by Zhiyuan Chen and Bing Liu (2018) and the same authors' tutorial materials from KDD-2016 and IJCAI-2015.