I'm a Reader (Associate Professor) at the School of Informatics, University of Edinburgh. My research focuses on understanding the visual world in motion: what video tells us about the world, its properties, its state, and how to estimate it. This broadly includes video representations, high-level perceptual tasks that involve video, joint learning of video and language, and the machine learning problems that these tasks pose.
WHAT'S NEW?
- Congrats to Shreyank for his paper at the EFM Workshop at ECCV '24!
- Congrats to Anil for his paper at ECCV 2024!
- I'll be a Reader (Associate Professor) starting in August 2024. Thanks to everyone who's been part of the process, especially my students.
- Congrats to Gen for his paper at CVPR 2024!
- Happy to serve as AC for ECCV 2024.
- Congrats to Dr. Shreyank N Gowda for his excellent thesis and his postdoctoral fellowship at Oxford.
- Congrats to Davide and Gen for their papers at CVPR 2023!
- Congrats to Anil, Davide, Kiyoon and Shreyank for having 3 papers accepted to BMVC '22!
- I will be co-organizing the workshop on "What is Motion For?" at ECCV '22.
- Congrats to Shreyank for having two papers accepted to ECCV '22!
- Happy to receive the Google Scholar Award 2022!
- I'll be serving as Program Chair at BMVC 2021.
- Congrats to Gen for having his paper accepted to CVPR 2021!
- Congrats to Shreyank for having his paper accepted to AAAI 2021, where we show that with the SMART selection of just a few frames you can get better action recognition than by using the entire video.
- We present the paper "Only Time Can Tell" at WACV 2021.
- Congrats to Shreyank for having his paper accepted to BMVC 2020, presenting his method ALBA, where we show that reinforcement learning greatly improves over gradient descent for following objects over time. Watch the video here.
- Welcome Gen, Jack and Anil who join our team.
- I'm serving as Area Chair for CVPR 2021.
- I'm co-organizing a workshop at ICLR 2020 on Computer Vision for Agriculture, with Yannis Kalantidis, Ernest Mwebaze and Dina Machuve.
- If you're looking for a benchmark to test how good your model is at capturing temporal information in videos, try our new Temporal Dataset, featured in the Facebook AI blog.
- Our paper describing FASTER was just accepted to AAAI '19! Congrats to Linchao. FASTER maintains state-of-the-art accuracy in video classification while reducing computation time by an order of magnitude.
- I'm co-leading with Yannis Kalantidis the very exciting CVPR Workshop on "Computer Vision for Global Challenges", where we explore novel vision problems in humanitarian, development, and other global domains. Check our video below!
- We're presenting the tiny DMC-Net for Action Recognition at CVPR '19. Everything you wanted in an action recognition network, for a fraction of the cost.
- Kiyoon Kim and Shreyank Gowda will join the lab and start their PhDs in September.
I'm always looking for highly motivated, independent, and talented PhD students.