


We kicked off a discussion in the last lesson - Objective function: AI is nothing but an optimization problem. We discussed the definition of an objective function in machine learning: essentially, it is the scorecard a Machine Learning model needs in order to know how well it is doing at a task. We also briefly discussed how important it is to align the Business Objective with the Machine Learning Objective function, yet quite often this alignment does not happen. Why so?
When companies kick off a machine learning project, especially in today's gold rush to establish themselves as AI companies, the tech and product teams start with AI in mind rather than where they should start: a user or a business problem.
In this lesson, we will learn to do it the right way, and we will also introduce the concept of a PRD - Product Requirement Document - a wildly misused or simply unused tool that is needed to align people on a common mission.
Even though this stage is tagged as Business Understanding, it can very well be called User/Business Understanding. It is captured in a Product Requirement Document (PRD). I will write a detailed article at another time to walk you through the various components of a PRD, but for now let us quickly look at its main components. The PRD should be the focal point of the rest of the framework.
Step 5.1. Breaking down the User/Business problem into sub-problems to answer: is it solvable?
In the previous section, we identified the problem statement (User or Business problem) that needs to be solved. However, time and again you and your team will pick up problem statements that may seem unsolvable. While a situation like this is not specifically tied to AI/ML, this problem becomes more daunting in the context of AI/ML. Why so? It is because:
When faced with this situation, it helps a lot to apply first principles thinking and break the problem statement down into sub-problems.
The first-principles approach encourages us to break complex problems down into their fundamental truths, questioning assumptions and re-examining the underlying principles. By stripping away layers of complexity, we gain clarity and insight into the problem at hand. This eventually helps us convert the problem into more manageable sub-problems, which lends itself to tackling each aspect individually and gradually building towards a solution.
Each sub-problem becomes a stepping stone on the path to solving the larger issue. With careful analysis and iteration, we address one sub-problem at a time, leveraging the power of AI and machine learning algorithms where applicable.
An example of an AI/ML problem statement that was solved by employing first principles thinking and breaking down the problem into smaller components is "Autonomous Vehicle Navigation in Complex Urban Environments."
Initially, engineers faced the challenge of creating AI/ML algorithms capable of safely navigating autonomous vehicles through densely populated cities with numerous obstacles, unpredictable traffic patterns, and diverse road conditions. The problem seemed overwhelming, as conventional approaches struggled to address the complexities involved.
However, by applying first principles thinking, engineers broke down the problem into smaller, more manageable components:
By breaking down the autonomous vehicle navigation problem into these smaller components and addressing them individually, engineers were able to overcome the challenges associated with navigating complex urban environments. This approach enabled them to develop robust AI/ML solutions capable of safely and efficiently guiding autonomous vehicles through city streets, contributing to advancements in self-driving technology.
Step 5.2. Sequence the sub-problems & pick the first sub-problem to solve
Once we have broken down the problem statement into smaller sub-problems, we can sequence them in the order of dependency or complexity.
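To make this concrete, here is a minimal sketch of sequencing by dependency using a topological sort. The sub-problem names and dependencies are hypothetical, loosely borrowed from the autonomous-vehicle example above; the point is simply that no sub-problem is attempted before its prerequisites.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical sub-problems; each maps to the sub-problems it depends on.
dependencies = {
    "perception": set(),                      # detect lanes, vehicles, pedestrians
    "localization": set(),                     # know where the vehicle is
    "prediction": {"perception"},              # anticipate other agents' behavior
    "path_planning": {"prediction", "localization"},
    "control": {"path_planning"},              # steering / braking commands
}

# Topological order: every sub-problem appears after its prerequisites,
# so the first item is a natural candidate for the first sub-problem to solve.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
# e.g. ['perception', 'localization', 'prediction', 'path_planning', 'control']
```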
Step 5.3. Re-assess whether the problem is solvable; otherwise, go back to Step 5.1 and repeat. If the sub-problems are still not solvable, in all likelihood we did not apply first principles correctly in Step 5.1.
Step 5.4. Framing the Machine Learning function: mapping Input (I) to Output (O) given Data (D) such that the Loss Function is satisficed
We know from Module 1 Lesson 3 here that machine learning is essentially finding a mapping function F(x) that maps the input to the desired output.
This step involves:
5.4.1 Understanding the Input (I): For example, when we wanted to predict the price at which a house would sell, we needed some input. Do you remember what it was? It was the identifier of the house, say the house address. But was that enough? We also needed to provide the features of the house, such as SQ Ft, #BR, #BA, Location, etc.
5.4.2 Outlining the Desired Output (O): In this case, it was ‘the price at which the house will sell, in $’.
5.4.3 Understanding the required context Data (D): Every AI model requires data, which acts as the context from which the machine learns. In the above case, we would need data on comparables: houses with similar features (similar SQ Ft, #BR, #BA, Location, etc.) and the prices at which they sold in the market. We will revisit Data in Module 2 Lesson 2. But even at this first stage, we should have a pretty good idea of whether we have the context data D required for the machine to learn the function. If we don't, then we need to increase our intimacy with in-house data.
5.4.4 Framing the Scenarios S: I → O, mapping input to output such that we minimize a Loss (Objective) Function L(S): The next part is to put in writing that we need to develop a function such that for every scenario S there is a mapping between I and O. However, this mapping must satisfy a mathematical constraint that we measure with the loss function L(S). We discussed this in Module 2 Lesson 1 here. Let us understand this with two examples (a short code sketch also follows at the end of this step):
We will answer this question later in this series as we need to build some more foundational understanding of the Machine learning models.
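To make the I → O framing concrete before moving on, here is a minimal sketch of the house-price problem from 5.4.1-5.4.3. The feature values, the column choices, and the use of scikit-learn's LinearRegression are illustrative assumptions, not a prescribed implementation; the point is simply to see I, O, D, F(x), and L(S) together in one place.

```python
# A minimal, hypothetical sketch of the I -> O framing for the house-price problem.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# D: context data of comparables -- features of sold houses and their sale prices.
# Assumed columns: [square_feet, bedrooms, bathrooms]
X_comparables = np.array([
    [1500, 3, 2],
    [2100, 4, 3],
    [1200, 2, 1],
    [1800, 3, 2],
])
y_sale_prices = np.array([450_000, 610_000, 330_000, 520_000])  # O: sale price in $

# F(x): learn the mapping from I to O using D.
model = LinearRegression()
model.fit(X_comparables, y_sale_prices)

# L(S): the loss function scoring how far predictions are from the actual prices.
predictions = model.predict(X_comparables)
loss = mean_squared_error(y_sale_prices, predictions)
print(f"Training MSE (loss): {loss:,.0f}")

# I: a new house described by the same features; O: its predicted sale price.
new_house = np.array([[1650, 3, 2]])
print(f"Predicted sale price: ${model.predict(new_house)[0]:,.0f}")
```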
Step 5.5. Rule-based Model or AI? Evaluate whether ML is the right approach
We already know that we should not implement an ML model just for the sake of it. Some business problems don’t need ML, as simple business rules can do an equally good or better job. For other business problems, there might not be sufficient data to apply ML, or ML is simply overkill when a rule-based model can do the job. We discussed this in detail in Module 1 Lesson 3 here.
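One lightweight way to keep this evaluation honest is to build the rule-based baseline first and adopt ML only if it beats the rules by a margin the business cares about. The sketch below is a hypothetical illustration: the dataset is synthetic, and the single-threshold rule and the improvement bar are made-up assumptions.

```python
# Hypothetical sketch: compare a simple rule-based baseline against an ML model
# and prefer ML only if the improvement is meaningful, not just marginal.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Assumed toy data: a few informative features plus noise.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Rule-based baseline: a single hand-written threshold on one feature.
rule_predictions = (X_test[:, 0] > 0).astype(int)
rule_accuracy = accuracy_score(y_test, rule_predictions)

# ML candidate.
ml_model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
ml_accuracy = accuracy_score(y_test, ml_model.predict(X_test))

# Only take on ML's extra complexity if it clears the rule by a meaningful margin.
MIN_IMPROVEMENT = 0.05  # hypothetical bar agreed with the business
use_ml = (ml_accuracy - rule_accuracy) >= MIN_IMPROVEMENT
print(f"rule={rule_accuracy:.2f}, ml={ml_accuracy:.2f}, use ML: {use_ml}")
```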
Step 5.6. ML Objective → Business Objective such that Business Objective is satisficed
Map the technical outcome to a business outcome. Let us revisit the business outcome in the above two examples:
One thing to note: above we said ‘Satisficing Criteria’. Many decisions in real-world machine learning problems fall into what we call satisficing decisions rather than optimizing decisions.
A satisficing metric is a measurement or criterion used in decision-making that aims to find an acceptable solution rather than an optimal one. Unlike optimization metrics, which strive to find the best possible outcome, satisficing metrics focus on identifying solutions that meet a predefined threshold of acceptability or sufficiency.
We discuss this at length in the article - Metrics: ‘Satisficing’ Metric — Not all metrics need to be optimized. It acts as a reminder to the product and technology teams not to optimize the model unnecessarily when further optimization yields only a marginal improvement in business metrics.
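As a concrete illustration of a satisficing decision, the sketch below selects the cheapest candidate model that clears an agreed business threshold, instead of the model with the absolute best score. All model names, uplift numbers, costs, and the threshold itself are hypothetical.

```python
# Hypothetical sketch of satisficing model selection: the business metric only needs
# to clear an agreed threshold; beyond that, prefer the cheaper/simpler model.
candidates = [
    # (model name, business metric e.g. conversion uplift %, serving cost per 1k requests $)
    ("logistic_regression", 4.1, 0.02),
    ("gradient_boosting",   4.6, 0.10),
    ("deep_ensemble",       4.7, 0.90),
]

UPLIFT_THRESHOLD = 4.0  # satisficing bar: "good enough" for the business objective

# Keep only candidates that satisfy the threshold, then choose the cheapest among them.
acceptable = [c for c in candidates if c[1] >= UPLIFT_THRESHOLD]
chosen = min(acceptable, key=lambda c: c[2])
print(f"Chosen model: {chosen[0]} (uplift {chosen[1]}%, cost ${chosen[2]}/1k requests)")
# -> logistic_regression: the extra uplift from the bigger models is marginal,
#    so we do not pay their cost. That is the essence of a satisficing decision.
```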
Step 5.7. Align Data Engineering, Data Science, and Business Executives on 1-5 so that DS/DE leads can take on the next part of the framework
Many AI projects fail not because of a lack of a technical toolkit to solve the problem, but because of a lack of alignment between internal stakeholders. The last phase in this stage is a reminder to all leaders and participants to ensure a collaborative approach throughout the journey. As they say:
When you have to go fast, go alone; when you have to go far, go together. Walking together is always the right answer in the long run, so build alignment early and often.
In the next lesson, we will discuss the next phase in our Machine Learning Development Framework - Data. Data is the new currency so let us see what needs to be in place to extract the maximum out of it.