How can ML and Applied Science Interviews be SOOOO much Harder than SWE Interviews?
The landscape of technical interviews can be daunting, especially when comparing roles in Machine Learning (ML) and Applied Science to those in Software Engineering (SWE). Recently, I had the opportunity to go through the final five rounds of an Applied Science interview with Amazon, and the experience was an eye-opener. Here’s a breakdown of the process and why I believe ML interviews can be significantly more challenging than typical SWE interviews.
The Interview Structure
The interview consisted of five rigorous rounds held in a single super-day format — four one-hour rounds plus a longer behavioral round:

- **ML Breadth:** This round covered a wide range of topics in classical ML and deep learning (DL), including math derivations. The knowledge required here is immense, touching almost every corner of the ML field.
- **ML Depth:** This round focused on my general research area, with intense questioning. The depth of grilling can be overwhelming, especially given the vastness of the ML domain.
- **Coding:** This round involved coding ML algorithms and solving LeetCode medium-level problems. You need coding skills on par with a mid-level software engineer.
- **Science Application:** Here, I had to demonstrate how I would apply ML systems to solve broad, open-ended problems, showcasing my practical understanding of ML concepts.
- **Behavioral:** The final round was a 1.5-hour deep dive into Amazon's leadership principles, conducted by a Bar Raiser. This round is crucial for assessing cultural fit and alignment with the company's values.
The Knowledge Requirement
To succeed in these interviews, one must possess extensive knowledge across a seemingly infinite number of concepts in ML. This includes not just theoretical understanding but also the ability to recall and reproduce complex mathematical concepts accurately. As someone who struggles with memory and recall, I found this aspect particularly daunting. Additionally, even within one’s area of research—often a vast field—it’s easy to encounter questions or topics that one may not be familiar with.
Comparison with SWE Interviews
In contrast, the requirements for an SWE role, even at prestigious companies like Amazon, often boil down to:
- LeetCode Practice: Mastering common algorithms and data structures.
- System Design: Required primarily for senior positions, focusing on the architecture of software solutions.
As someone who excels at LeetCode-style problems, I find the ad hoc thinking and problem-solving aspects of SWE interviews more straightforward. With enough practice, candidates can familiarize themselves with the most common patterns and question types.
The Challenge of ML Interviews
The stark difference lies in the breadth and depth of knowledge required for ML interviews. For instance, I was tasked with recalling obscure theoretical details about concepts like soft-margin Support Vector Machines, discussing the challenges of Reinforcement Learning from Human Feedback (RLHF) in aligning large language models (LLMs) to human preferences, and coding a sparse attention mechanism in PyTorch—all within a short timeframe. This level of depth is often not required in SWE interviews.
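To give a flavor of what "coding a sparse attention mechanism" can look like under time pressure, here is a minimal sketch of one common sparse pattern — local (sliding-window) attention, where each query only attends to keys within a fixed window. This is just one illustrative variant, not necessarily what any particular interviewer asks for; the function name and window-based mask are my own choices for the example.

```python
import torch

def local_sparse_attention(q, k, v, window: int):
    """Scaled dot-product attention restricted to a local window:
    query i may only attend to keys j with |i - j| <= window.

    q, k, v: tensors of shape (batch, seq_len, d)
    """
    d = q.size(-1)
    # Full attention scores, scaled by sqrt(d).
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (batch, seq, seq)
    # Boolean mask that is True inside the local window.
    idx = torch.arange(q.size(1))
    mask = (idx[None, :] - idx[:, None]).abs() <= window  # (seq, seq)
    # Positions outside the window get -inf, so softmax zeroes them out.
    scores = scores.masked_fill(~mask, float("-inf"))
    weights = scores.softmax(dim=-1)
    return weights @ v

# Tiny usage example.
q = k = v = torch.randn(1, 6, 8)
out = local_sparse_attention(q, k, v, window=1)
print(out.shape)  # torch.Size([1, 6, 8])
```

The point of an exercise like this is usually less the exact sparsity pattern and more whether you can reason about masking, numerical stability, and tensor shapes on the spot.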
Compensation vs. Difficulty
One of the most frustrating aspects is that despite the extensive knowledge and hard work needed to prepare for ML interviews, the compensation often mirrors that of SWE roles. The job itself can be significantly more challenging due to the variety of tasks and skills required, whereas SWE roles typically allow for deeper expertise in a specific tech stack.
Community Insights
Many in the ML community have echoed similar sentiments. One ML engineer shared their experience of feeling overwhelmed by the need to juggle LeetCode practice, classical ML theory, and the intricacies of LLMs. Another candidate expressed a preference for ML rounds over LeetCode rounds, finding the former more aligned with their strengths. The consensus seems to be that while ML interviews can be challenging, some candidates feel more at home in those discussions than in abstract coding problems.
Conclusion
It’s evident that the landscape of technical interviews is evolving, and while all IT jobs can be challenging, the demands placed on candidates for ML and Applied Science roles are substantial. As I navigate through this process, I remind myself that every challenge is an opportunity for growth. For those currently preparing for similar interviews, best of luck! Remember, you’re not alone in this journey, and hopefully, things will get better soon.