When machines learn, what do they actually learn? How can we know for sure? Can there be a better way to learn?
For my dissertation research I have been trying to answer questions like these. If you are interested, join me at the IUI Workshop on Explainable Smart Systems in L.A. this March, where I am presenting Learn2Sign: a feedback-driven technique for learning sign language.
Within Computer Science, my interests are in Artificial Intelligence, Computer Vision, Gesture Recognition, Sign Language Recognition, and Natural Language Processing. I have started learning some American Sign Language myself, and that is a lot of fun!
Outside of Computer Science, I like to stay informed on topics in Economics, Neuroscience, and Social Psychology, among others (Goodreads). I also like to think of myself as an avid hiker and an amateur photographer (Flickr).
The projects I am currently involved in are:
- DyFAV: Dynamic Feature Selection and Voting for Real-time Recognition of Fingerspelled Alphabet using Wearables (pdf)
- Learn2Sign: An interactive tutor for Sign Language learning
- SignType: A novel HCI method for interacting with computers.
- GRUSI: Recognizing human actions and gestures using representative, human-understandable images
- MirrorGen: Using human movement models in Unity for accurate gesture recognition using sensors
- ClassroomVR: Towards making an accessible VR University environment
Some of these are still in the publication pipeline, so more details to come! But let me know if you are curious.
Here is a video from back when I started with Sign Language Recognition, and here is an article about it in MIT Technology Review.
If you have a cool project idea, or would like to collaborate on my existing work, send me an email and let's talk, especially if it has to do with XR or gestures.
ppaudyal at asu dot edu
- Prajwal Paudyal, Ayan Banerjee, and Sandeep K. S. Gupta. SCEPTRE: A Pervasive, Non-Invasive, and Programmable Gesture Recognition Technology. Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI), ACM, March 2016. (pdf)
- Prajwal Paudyal, Junghyo Lee, Ayan Banerjee, and Sandeep K. S. Gupta. DyFAV: Dynamic Feature Selection and Voting for Real-Time Recognition of Fingerspelled Alphabet Using Wearables. Proceedings of the 22nd International Conference on Intelligent User Interfaces (IUI).
- Junghyo Lee, Prajwal Paudyal, Ayan Banerjee, and Sandeep K. S. Gupta. FIT-EVE&ADAM: Estimation of Velocity & Energy for Automated Diet Activity Monitoring. 16th IEEE International Conference on Machine Learning and Applications (ICMLA), 2017.
- Junghyo Lee, Ayan Banerjee, Prajwal Paudyal, and Sandeep K. S. Gupta. MT-Diet: Automated Diet Assessment Using Myo and Thermal. Late-Breaking Research Abstract, Conference on Wireless Health.
- Junghyo Lee, Prajwal Paudyal, Ayan Banerjee, and Sandeep K. S. Gupta. IDEA: Instant Detection of Eating Action Using Wrist-Worn Sensors in Absence of User-Specific Model. Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization (UMAP).
- Junghyo Lee, Ayan Banerjee, and Sandeep K. S. Gupta. MT-Diet: Automated Smartphone-Based Diet Assessment with Infrared Images. 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
I'll pull my publications here soon, but meanwhile here is a link to my Google Scholar page. It is fairly up to date, except for a recent journal article published in TiiS.
- Outstanding Research Award, GPSA, 2016
- Caukin's Communication Award, 2016, for facilitating communication and collaboration among graduate students
- Jumpstart Research Grant, 2017
- Other travel awards from ACM SIGCHI, SIGAI, and CIDSE (ASU)
- CIDSE Graduate Grants