Checking date: 13/05/2019


Course: 2019/2020

(18496)
Study: Bachelor in Sound and Image Engineering (214)


Coordinating teacher: GONZALEZ DIAZ, IVAN

Department assigned to the subject: Department of Signal and Communications Theory

Type: Electives
ECTS Credits: 3.0 ECTS

Course:
Semester:




Students are expected to have completed
Students are expected to have studied Linear Systems. Although not mandatory, basic knowledge of Digital Image Processing is welcome.
Competences and skills that will be acquired and learning results. Further information on this link
Competences and skills that will be acquired by the students are:
  • Basic Competences: CB1, CB2
  • General Competences: CG3, CG10, CG11
  • Specific Competences: ETEGISA1, ETEGISA5, ETEGISC6, ETEGT1, ETEGITT3
Learning results and their relation with the course contents:
  • To learn digital images and the spatial filtering operation over images.
  • To know basic concepts of Machine Learning: loss functions, regularization, hyperparameters, data augmentation, etc.
  • To understand deep neural networks and their training algorithms: gradient descent and back-propagation.
  • To learn Convolutional Neural Networks (CNNs) and their most usual processing blocks/layers.
  • To understand, design and train CNN architectures for image classification.
  • To understand, design and train advanced CNN architectures to address other tasks of visual recognition: object detection, image captioning, image segmentation, image synthesis, etc.
Description of contents: programme
Unit 1. Basic concepts of visual recognition
  1.1 Digital Images
  1.2 Spatial Filtering
  1.3 Part Models for Object Recognition
Unit 2. Basic concepts of Deep Learning
  2.1 Machine Learning Algorithms
  2.2 Loss Functions
  2.3 Regularization
  2.4 Hyperparameters and Validation
  2.5 Deep Neural Networks
  2.6 Gradient Descent-based Learning Algorithms
  2.7 Backpropagation
Unit 3. Convolutional Neural Networks (CNNs) for image classification
  3.1 Introduction
  3.2 Basic Processing Layers in a CNN
  3.3 CNN Architectures for Image Classification
  3.4 Training a CNN for Image Classification: Data Pre-processing, Data Augmentation and Initialization
Unit 4. Deep networks for other image-related tasks
  4.1 Networks for Object Detection
  4.2 Networks for Image Segmentation
  4.3 Networks for Image Matching
  4.4 Networks for Image Captioning
  4.5 Networks for Image Synthesis
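As an illustration of the spatial filtering operation covered in Unit 1.2, the following minimal NumPy sketch (illustrative only, not part of the course materials; function and variable names are assumptions) applies a 3×3 averaging kernel to a small image:

```python
import numpy as np

def filter2d(image, kernel):
    """Spatially filter a 2-D image with a small kernel (zero padding).

    Illustrative sketch of the filtering operation of Unit 1.2;
    not the course's own implementation.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Weighted sum of the neighbourhood around pixel (i, j).
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 averaging (box) kernel smooths the image.
img = np.arange(25, dtype=float).reshape(5, 5)
box = np.ones((3, 3)) / 9.0
smoothed = filter2d(img, box)
```

In a CNN (Unit 3), the same sliding-window operation is applied, but the kernel weights are learned from data rather than fixed by hand.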
Learning activities and methodology
Two teaching activities are proposed: lectures and lab sessions.

LECTURES
The lecture sessions will be supported by slides or other means to illustrate the concepts explained, and the explanations will be completed with examples. In these sessions the students will acquire the basic concepts of the course. It is important to highlight that these classes require the initiative and the personal and group involvement of the students (there will be concepts that the students themselves should develop).

LABORATORY SESSIONS
This is a course with a strong practical component, and students will attend laboratory sessions frequently. In them, the concepts explained during the lectures will be put into practice using deep learning software libraries (e.g., PyTorch). The laboratory provides machines equipped with high-performance GPUs, and free cloud computing services such as Google Colab will also be used.
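As a taste of what the lab sessions put into practice, the following minimal sketch (illustrative only; the labs use deep learning libraries such as PyTorch, while this example uses plain NumPy) shows the gradient-descent updates of Unit 2.6 fitting a linear model under a mean-squared-error loss:

```python
import numpy as np

# Toy, noise-free data generated by y = 2x + 1 (assumed for illustration).
x = np.linspace(-1.0, 1.0, 20)
y = 2.0 * x + 1.0

w, b = 0.0, 0.0   # model parameters, initialized at zero
lr = 0.1          # learning rate (a hyperparameter, cf. Unit 2.4)

for _ in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean-squared-error loss w.r.t. w and b.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    # Gradient-descent update: step against the gradient.
    w -= lr * grad_w
    b -= lr * grad_b
```

After training, `w` and `b` approach the generating values 2 and 1. For deep networks, the same update rule is applied to every layer, with the gradients supplied by backpropagation (Unit 2.7).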
Assessment System
  • % end-of-term-examination 60
  • % of continuous assessment (assignments, laboratory, practicals...) 40
Basic Bibliography
  • Francois Chollet. Deep Learning with Python. Manning Publications. 2017
  • Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. The MIT Press. 2016
Additional Bibliography
  • Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer. 2006
  • David A. Forsyth and Jean Ponce. Computer Vision: A Modern Approach. Pearson. 2012

The course syllabus and the academic weekly planning may change due to academic events or other reasons.