I have degrees in Electronics and Communications (B.Tech; 2014), Robotics (MS; 2016) and Computer Science (PhD; 2020).
My passion is Computer Vision, and I have worked with a variety of vision sensors; my focus is humans as an input. Humans are a difficult subject for artificial systems to observe accurately (body posture, expressions, eye gaze), let alone understand (intention, emotional state, engagement, wakefulness). I am interested in creating technologies that support research into better understanding people, and in turn improve the way AI systems interact with us on a day-to-day basis, improving quality of life.
I would also like AI systems to be more sustainable in their compute and power requirements. I work towards modules that run fast and accurately while remaining a small part of a larger system, where every module must share resources on a stand-alone computational platform. And I would like AI models to train faster, more easily, and with less compute.
My tools of choice are Neuromorphic Vision sensors (also known as event cameras), Deep Neural Networks, and both classic and state-of-the-art Computer Vision methodologies. I have experience in Action Recognition, Human Pose Estimation, and Deep Learning, and I have recently become enthusiastic about Graph Neural Networks and eye-tracking.