Zeeshan Zia

Redmond, WA 98052
zeeshan{@}retrocausal{.}ai

CEO and Co-Founder
Retrocausal Inc.

At Retrocausal, we are building the world's first truly scalable visual activity understanding platform. We started with the manufacturing assembly vertical, where we are seeing strong interest from the largest manufacturers. I have had the opportunity to visit more than two dozen factories and work alongside operators and industrial engineers. Our technology has won awards from NASA JPL, the Embedded Vision Alliance, IEEE VSTG, and CB Insights, and we have raised $4.5M from institutional and corporate investors.

In a past life, I authored more than 20 peer-reviewed publications on visual activity understanding, 3D scene understanding, learning from synthetic data, and machine learning systems, cited by major labs from MIT, Stanford, and CMU to Apple, Google, and Baidu.

I am also an inventor on almost 20 US patents or patent applications, and have shipped AI-first products including Microsoft HoloLens, the Vuforia AR platform, MVTec Halcon, and camera-based self-driving systems.

Curriculum Vitae

External Links

Industry

Microsoft Research
Redmond, WA
Senior Scientist (HoloLens)
2017-2019

NEC Laboratories America
Cupertino, CA
Researcher
2015-2017

Qualcomm Research
Vienna, Austria
Research Intern
Summer 2013

Siemens Corp. Technologies
Munich, Germany
Engineering Intern
Summer 2008

Academia

Postdoc
Imperial College London
London, UK
2014-2015

PhD
Swiss Federal Institute of Technology
Zurich, Switzerland
2009-2013

MS
Munich University of Technology
Munich, Germany
2007-2009

Selected Recent Publications

  • S. Kumar, S. Haresh, A. Ahmed, A. Konin, M.Z. Zia, Q.H. Tran. Unsupervised Action Segmentation by Joint Representation Learning and Online Clustering. CVPR 2022.
    We present a novel approach to unsupervised activity segmentation that uses video frame clustering as a pretext task and simultaneously performs representation learning and online clustering, in contrast with prior works where representation learning and clustering are performed sequentially. We leverage temporal information in videos by employing temporal optimal transport: we incorporate a temporal regularization term, which preserves the temporal order of the activity, into the standard optimal transport module for computing pseudo-label cluster assignments. The temporal optimal transport module enables our approach to learn effective representations for unsupervised activity segmentation. Furthermore, previous methods require storing learned features for the entire dataset before clustering them offline, whereas our approach processes one mini-batch at a time in an online manner. Extensive evaluations on three public datasets (50 Salads, YouTube Instructions, and Breakfast) and our own Desktop Assembly dataset show that our approach performs on par with or better than previous methods while requiring significantly less memory. A small illustrative sketch of the temporal optimal transport step is included after the BibTeX entry below.
    @inproceedings{kumar22cvpr,
     author = {S. Kumar and S. Haresh and A. Ahmed and A. Konin and M.Z. Zia and Q.H. Tran},
     title = {Unsupervised Action Segmentation by Joint Representation Learning and Online Clustering},
     booktitle = {CVPR},
     year = {2022}
    }
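
As a rough illustration of the temporal optimal transport step described above, the short NumPy sketch below combines frame-to-cluster similarities with a Gaussian temporal prior and runs a few Sinkhorn-Knopp iterations to obtain soft pseudo-label assignments. The function name, the shape of the prior, and all hyperparameters are illustrative assumptions for exposition, not the exact formulation or code from the paper.

import numpy as np

def temporal_optimal_transport(similarity, sigma=0.3, eps=0.1, n_iters=50):
    """Illustrative sketch: entropic optimal transport with a temporal prior.

    similarity: (T, K) frame-to-cluster similarities for one mini-batch video.
    Returns a (T, K) soft assignment whose rows each carry mass ~1/T.
    """
    T, K = similarity.shape
    # Temporal prior (assumption): frame t is nudged toward cluster ~ t*K/T,
    # which preserves the temporal order of actions within the video.
    t_idx = np.arange(T)[:, None] / T
    k_idx = np.arange(K)[None, :] / K
    prior = np.exp(-((t_idx - k_idx) ** 2) / (2 * sigma ** 2))

    # Entropic OT kernel combining similarity and the temporal prior.
    logits = (similarity - similarity.max()) / eps  # subtract max for numerical stability
    M = np.exp(logits) * prior
    r = np.full(T, 1.0 / T)  # uniform mass over frames
    c = np.full(K, 1.0 / K)  # uniform mass over clusters

    u = np.ones(T)
    for _ in range(n_iters):  # Sinkhorn-Knopp scaling
        v = c / (M.T @ u)
        u = r / (M @ v)
    return (u[:, None] * M) * v[None, :]

# Illustrative use: cosine similarities between frame embeddings and cluster prototypes.
feats = np.random.randn(64, 32)   # 64 frames, 32-d embeddings (toy data)
protos = np.random.randn(10, 32)  # 10 cluster prototypes (toy data)
sim = (feats @ protos.T) / (
    np.linalg.norm(feats, axis=1, keepdims=True) * np.linalg.norm(protos, axis=1) + 1e-8)
Q = temporal_optimal_transport(sim)
pseudo_labels = Q.argmax(axis=1)  # per-frame cluster pseudo-labels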
  • S. Haresh, S. Kumar, H. Coskun, S.N. Syed, A. Konin, M.Z. Zia, Q.H. Tran. Learning by Aligning Videos in Time. CVPR 2021.
    We present a self-supervised approach for learning video representations using temporal video alignment as a pretext task, while exploiting both frame-level and video-level information. We leverage a novel combination of a temporal alignment loss and a temporal regularization term, which serve as supervision signals for training an encoder network. Specifically, the temporal alignment loss (Soft-DTW) seeks the minimum cost for temporally aligning videos in the embedding space. However, optimizing solely for this term leads to trivial solutions, in particular one where all frames are mapped to a small cluster in the embedding space. To overcome this problem, we propose a temporal regularization term (Contrastive-IDM) that encourages different frames to be mapped to different points in the embedding space. Extensive evaluations on various tasks, including action phase classification, action phase progression, and fine-grained frame retrieval, on three datasets (Pouring, Penn Action, and IKEA ASM) show superior performance of our approach over state-of-the-art methods for self-supervised representation learning from videos. In addition, our method provides significant performance gains where labeled data is lacking. A small illustrative sketch of the Soft-DTW and Contrastive-IDM ingredients is included after the BibTeX entry below.
    @inproceedings{haresh21cvpr,
     author = {S. Haresh and S. Kumar and H. Coskun and S.N. Syed and A. Konin and M.Z. Zia and Q.H. Tran},
     title = {Learning by Aligning Videos in Time},
     booktitle = {CVPR},
     year = {2021}
    }
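
As a rough illustration of the two ingredients above, the NumPy sketch below implements a plain Soft-DTW recursion and a simple hinge-style temporal regularizer in the spirit of Contrastive-IDM. The function names, the squared-Euclidean frame cost, and the loss weighting are illustrative assumptions; the paper's exact losses and their implementation inside a differentiable training framework differ in detail.

import numpy as np

def soft_min(values, gamma):
    # Smooth minimum used by Soft-DTW: -gamma * log(sum(exp(-v / gamma))).
    z = -np.asarray(values, dtype=float) / gamma
    m = z.max()
    return -gamma * (m + np.log(np.exp(z - m).sum()))

def soft_dtw(x, y, gamma=0.1):
    # Soft-DTW alignment cost between two embedded videos x: (T1, D) and y: (T2, D).
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** 2  # pairwise frame costs
    T1, T2 = cost.shape
    R = np.full((T1 + 1, T2 + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            R[i, j] = cost[i - 1, j - 1] + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[T1, T2]

def contrastive_idm(x, window=5, margin=1.0):
    # Temporal regularizer (sketch): attract temporally close frames and push
    # temporally distant frames at least `margin` apart in the embedding space.
    T = len(x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** 2  # pairwise squared distances
    gap = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])      # temporal separation
    near, far = gap <= window, gap > window
    loss = d[near].sum() + np.maximum(0.0, margin - d[far]).sum()
    return loss / (T * T)

# Illustrative use: combined loss for a pair of encoded videos (toy embeddings).
x, y = np.random.randn(40, 16), np.random.randn(50, 16)
total_loss = soft_dtw(x, y) + 0.5 * (contrastive_idm(x) + contrastive_idm(y))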