ML platforms enable intelligent, data-driven applications and keep them maintainable with limited engineering effort. Once adoption is sufficiently broad, such platforms reach economies of scale that bring greater component reuse and improve the efficiency of system development and maintenance. For an end-to-end ML platform with broad adoption, scaling relies on pervasive ML automation and system integration to reach the quality we term self-serve, which we define via ten requirements and six optional capabilities.
With this in mind, we identify long-term goals for platform development, discuss the associated tradeoffs, and outline future work. Our reasoning is illustrated on two commercially deployed end-to-end ML platforms that host hundreds of real-time use cases at Meta: one general-purpose and one specialized.
I am currently a Research Engineer on the Adaptive Experimentation team at Meta, where I previously led the Personalized Experimentation team. My current work focuses on lowering the barrier to entry for ML and improving outcomes via AutoML practices. I am interested in causal inference, interpretable/explainable machine learning, and Bayesian optimization.
Prior to Meta, I worked at the intersection of machine learning and healthcare, and I am passionate about applications of AI for science. Specifically, my group developed a novel imaging modality that uses computer vision methods to improve intra-operative decision making in a neurosurgical setting.