ICCVG 2020

Invited Lectures 2018 (this information pertains to ICCVG 2018 and will be modified in due time)

Prof. Nicolai Petkov
University of Groningen, Faculty of Science and Engineering
Intelligent Systems research group
Lecture title: Representation Learning with Trainable COSFIRE Filters


Prof. Alfred M. Bruckstein, Ollendorff Professor of Science
Technion - Israel Institute of Technology, Computer Science Department
Geometric Image Processing Lab
Lecture title: Engravings and One Liners: On Non-Photorealistic Renderings


Prof. Ivan Laptev
École normale supérieure, Computer Science Department
WILLOW computer vision and machine learning research laboratory
Lecture title: Towards Action Understanding with Less Supervision


Dr Zygmunt Ladyslaw Szpak
University of Adelaide, School of Computer Science
Australian Centre for Visual Technologies
Lecture title: A Comprehensive Image Formation Model as a Tool for Parameter Estimation at the Limits of Resolution


Summary: Representation Learning with Trainable COSFIRE Filters (Prof. Nicolai Petkov)

In order to be effective, traditional pattern recognition methods typically require a careful manual design of features, involving considerable domain knowledge and effort by experts. The recent popularity of deep learning is largely due to the automatic configuration of effective early and intermediate representations of the data presented. The downside of deep learning is that it requires a huge number of training examples.

Trainable COSFIRE filters are an alternative to deep networks for the extraction of effective representations of data. COSFIRE stands for Combinations of Shifted Filter Responses. Their design was inspired by the function of certain shape-selective neurons in areas V4 and TEO of the visual cortex. A COSFIRE filter is configured by the automatic analysis of a single pattern. The highly non-linear filter response is computed as a combination of the responses of simpler filters, such as Difference-of-(color-)Gaussians or Gabor filters, taken at different positions within the pattern concerned. The identification of the parameters of the simpler filters that are needed, and of the positions at which their responses are taken, is done automatically. An advantage of this approach is its ease of use: it requires no programming effort and little computation, since the parameters of a filter are derived automatically from a single training pattern. Hence, a large number of such filters can be configured effortlessly, and selected responses can be arranged in feature vectors that are fed into a traditional classifier.
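A minimal sketch may make this configure-then-apply scheme concrete. It uses Difference-of-Gaussians responses as the "simpler filters" and combines shifted responses by a geometric mean; the circular sampling grid, radii, blur and threshold values below are illustrative assumptions, not the published COSFIRE parameters.

```python
# Sketch of a COSFIRE-style filter: configured from one prototype,
# then applied to new images. Illustrative, not the reference code.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def dog_response(image, sigma):
    """Difference-of-Gaussians response, rectified to be non-negative."""
    r = gaussian_filter(image, sigma) - gaussian_filter(image, 2.0 * sigma)
    return np.maximum(r, 0.0)

def configure(prototype, center, radii=(5, 10), sigma=2.0, threshold=0.2):
    """Analyse a single prototype: on circles around `center`, keep the
    positions where the DoG response is strong. Each kept position
    becomes one tuple (dx, dy, sigma) of the configured filter."""
    resp = dog_response(prototype, sigma)
    tuples = []
    for rho in radii:
        for phi in np.linspace(0.0, 2 * np.pi, 16, endpoint=False):
            dx, dy = rho * np.cos(phi), rho * np.sin(phi)
            x, y = int(center[0] + dx), int(center[1] + dy)
            if 0 <= x < resp.shape[0] and 0 <= y < resp.shape[1]:
                if resp[x, y] > threshold * resp.max():
                    tuples.append((dx, dy, sigma))
    return tuples

def apply_cosfire(image, tuples):
    """Filter response: geometric mean of blurred, shifted DoG responses,
    one per configured tuple (the highly non-linear combination)."""
    acc = np.ones_like(image, dtype=float)
    for dx, dy, sigma in tuples:
        r = gaussian_filter(dog_response(image, sigma), 1.0)  # blur for positional tolerance
        acc *= nd_shift(r, (-dx, -dy), order=1) + 1e-9        # move the response toward the center
    return acc ** (1.0 / max(len(tuples), 1))
```

Since the configuration step is just this single pass over one prototype, many such filters can indeed be set up cheaply and their peak responses stacked into a feature vector for a conventional classifier.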

Summary: Engravings and One Liners: On Non-Photorealistic Renderings (Prof. Alfred M. Bruckstein)

We discuss the topic of "reverse engineering" artistic renderings, such as the classical techniques of engraving that culminated in Dürer's amazing art, and one-liner drawings, exemplified by modern works of Calder, Cocteau and Picasso. The necessary machine vision methods, involving depth estimation from images and advanced processes of edge detection and integration, are available and yield interesting rendering results; however, the challenge in emulating art remains quite formidable.
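As a toy illustration of the engraving flavour (not the methods of the talk), one classical rendering trick is to threshold image tone against a periodic line carrier, so that darker regions receive denser, thicker ink lines; everything below is an illustrative assumption.

```python
# Toy engraving-style line-screen rendering of a grayscale image.
import numpy as np

def engrave(gray, period=6, angle_deg=30.0):
    """Threshold tone against a rotated sinusoidal line carrier:
    a pixel stays white only where its tone beats the carrier."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    t = np.deg2rad(angle_deg)
    carrier = 0.5 + 0.5 * np.sin(2 * np.pi * (xx * np.cos(t) + yy * np.sin(t)) / period)
    return (gray > carrier).astype(np.uint8) * 255

# Example: a radial gradient becomes lines of smoothly varying thickness.
gray = np.clip(np.linalg.norm(np.mgrid[-1:1:256j, -1:1:256j], axis=0), 0.0, 1.0)
out = engrave(gray)
```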

Summary: Towards Action Understanding with Less Supervision (Prof. Ivan Laptev)

In contrast to the impressive progress in static image recognition, action understanding remains a puzzle. The lack of large annotated datasets, the compositional nature of activities and the ambiguities of manual supervision are likely obstacles to a breakthrough. To address these issues, this talk will present alternatives to the fully-supervised approach to action recognition. First, I will discuss methods that can efficiently deal with annotation noise. In particular, I will talk about learning from incomplete and noisy YouTube tags, weakly-supervised action classification from textual descriptions, and weakly-supervised action localization using sparse manual annotation.

The second half of the talk will discuss the problem of automatically defining appropriate human actions and will draw relations to robotics.
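To give a concrete, hypothetical flavour of the weakly-supervised setting (this illustrates the general idea, not the specific methods of the talk), a standard multiple-instance learning recipe scores every clip of a video but trains only on the video-level label, propagated through the maximum clip score, so that training implicitly localizes the action.

```python
# Multiple-instance learning for weakly-supervised action localization:
# only video-level labels are given; the best-scoring clip carries the loss.
import numpy as np

rng = np.random.default_rng(0)

def train_mil(videos, labels, dim, epochs=200, lr=0.1):
    """videos: list of (n_clips, dim) feature arrays; labels in {0, 1}."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y in zip(videos, labels):
            scores = x @ w                         # per-clip scores
            k = int(np.argmax(scores))             # most action-like clip
            p = 1.0 / (1.0 + np.exp(-scores[k]))   # video-level probability
            w += lr * (y - p) * x[k]               # logistic update on that clip
    return w

# Toy data: each positive video hides one clip near an "action" direction.
dim = 16
action = rng.normal(size=dim)
def make_video(positive):
    clips = rng.normal(size=(10, dim))
    if positive:
        clips[rng.integers(10)] += 3.0 * action
    return clips

videos = [make_video(i % 2 == 0) for i in range(40)]
labels = [1 if i % 2 == 0 else 0 for i in range(40)]
w = train_mil(videos, labels, dim)  # argmax of x @ w then localizes the action clip
```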

Summary: A Comprehensive Image Formation Model as a Tool for Parameter Estimation at the Limits of Resolution (Dr Zygmunt Ladyslaw Szpak)

At the limits of camera resolution, objects are blurred, tiny, and almost unrecognisable. To extract relevant knowledge from such limited data, one needs to have some weak prior information about the nature of the object under observation, and a comprehensive mathematical model of the image formation process. The canonical mathematical models of the image formation process make numerous simplifying assumptions which are inadequate when one needs to work at the limit of the sensor resolution. For example, the standard image formation model does not take diffraction and other sources of image blur into account.

When working at the limits of sensitivity, one needs to be able to model what happens at the sub-pixel level, and this means that one needs to take the details of the photosensitive area within each pixel into account. I will detail the development of a more sophisticated and tractable mathematical model of the image formation process, which can facilitate precise curve fitting to limited and blurred pixel data. The model incorporates the effects of several elements—the point-spread function, the discretisation step, the quantisation step, and photon noise. I will demonstrate the utility of the model on the real-world problem of elliptic landmark estimation at the limits of resolution. Comprehensive experimental results on simulated and real imagery will show that the approach yields parameter estimates with unprecedented accuracy.
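A minimal sketch of such a forward model may help fix ideas. It chains the four elements named above under assumed ingredients (a Gaussian point-spread function, area integration over a square photosensitive region, Poisson shot noise, and uniform quantisation); all functions and constants below are illustrative, not the model developed in the talk.

```python
# Forward image formation: blur -> pixel-area integration -> shot noise -> quantisation.
import numpy as np
from scipy.ndimage import gaussian_filter

def render(scene, height, width, oversample=8, psf_sigma=1.2,
           photons=2000.0, bits=8, rng=None):
    """scene(x, y) -> radiance in [0, 1], with x, y in pixel units."""
    rng = rng or np.random.default_rng(0)
    n = oversample
    # Sub-pixel sample grid: n*n samples per pixel.
    ys = (np.arange(height * n) + 0.5) / n
    xs = (np.arange(width * n) + 0.5) / n
    fine = scene(xs[None, :], ys[:, None])
    # 1. Optical blur: convolve with the point-spread function.
    fine = gaussian_filter(fine, psf_sigma * n)
    # 2. Discretisation: integrate over each pixel's photosensitive area.
    irradiance = fine.reshape(height, n, width, n).mean(axis=(1, 3))
    # 3. Photon (shot) noise: Poisson counts proportional to irradiance.
    counts = rng.poisson(irradiance * photons)
    # 4. Quantisation: map counts to the sensor's digital levels.
    levels = 2 ** bits - 1
    return np.clip(np.round(counts / photons * levels), 0, levels).astype(int)

# Example: a small bright ellipse only a few pixels across, i.e. near
# the resolution limit, of the kind used for elliptic landmark fitting.
def ellipse(x, y, cx=16.0, cy=12.0, a=3.0, b=1.5):
    return (((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0).astype(float)

img = render(ellipse, height=24, width=32)
```

With such a generative model in hand, parameter estimation amounts to fitting the model's predicted pixel values to the observed ones, which is what allows precise curve fitting to limited and blurred data.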
