An AI proof-of-concept (POC) can help you measure business benefits without having to invest in a full-blown implementation. Read below for a step-by-step approach to implementing a POC in your organization.
In the same way you test-drive a car before you buy it, experiencing something first-hand can answer many questions that a description can’t. This experience is even more important in cases where there is some degree of uncertainty as to whether the solution will work at all. An AI proof-of-concept will tell you whether a solution can provide the value you expect and if so, how much benefit it would bring. This way, you can determine whether the benefits are worth the investment.
When considering using AI, there are two key questions that emerge:
- Is what I am trying to achieve in fact feasible?
- And if so, is it worth it?
Implementing a solid POC will help answer both these questions. Here’s how to do it.
Start with a Clear Objective
What is the purpose of implementing AI in the first place? The objective of the POC needs to be clear and crisp. A tightly-defined objective will make it possible to answer the two questions. Here’s an example of a well-defined objective: “The machine learning model developed during the POC must accurately categorize 75% of the records currently being categorized manually by staff”.
Defining a quantified outcome like the 75% above creates the conditions for an unequivocal answer. Anything above 75% means the project is feasible; anything below means going back to the drawing board. More on this below.
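To make the pass/fail check concrete, here is a minimal sketch of measuring a POC model's accuracy against the 75% objective. All data, category names, and the threshold-check logic are illustrative assumptions, not part of any particular library or project:

```python
# Hypothetical sketch: checking a POC model's accuracy against the
# quantified objective (75% of records categorized correctly).
OBJECTIVE = 0.75  # the threshold defined in the POC objective

def accuracy(predicted, actual):
    """Fraction of records where the model's category matches the
    category assigned manually by staff."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Illustrative data: categories assigned by staff vs. by the model.
manual_labels = ["invoice", "receipt", "invoice", "memo",
                 "invoice", "receipt", "memo", "invoice"]
model_labels  = ["invoice", "receipt", "memo", "memo",
                 "invoice", "receipt", "memo", "receipt"]

score = accuracy(model_labels, manual_labels)
print(f"accuracy = {score:.2f}")  # 6 of 8 correct -> 0.75
print("feasible" if score >= OBJECTIVE else "back to the drawing board")
```

In a real POC the comparison would run over thousands of held-out records, but the decision rule stays this simple: one number measured against one pre-agreed threshold.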
If you have a hard time articulating the objective, you may be getting ahead of yourself. Read Integrating AI into your Product to get a better sense of what AI can do for your organization.
Communicate the Objective to the Team
Before any code is written, it is crucial that everyone involved in the project understands the objective. In AI and machine learning projects, a successful outcome is not always clear-cut. For example, the lead machine learning engineer working on the project could be trying to minimize false positives at all costs, which could be thought of as a legitimate sign of a good outcome. However, this goal can introduce a tradeoff that ultimately makes the result less desirable in the context of the project objective.
Communicating the objective clearly and ensuring that everyone involved understands it can help you save time and effort (and cost!) spent pursuing the wrong goals.
Do the Work
With everyone focused on the objective, it is time to develop the proof-of-concept. The POC should be developed in isolation from the actual live production systems to prevent interfering with “live” business operations. Use historical data that has been verified and properly classified by staff who understand the domain area (e.g. subject matter experts). Ensuring that you have good data is key to training high-quality machine learning models. Bad data, like misclassified records, will lead you down a path of wrong assumptions and ultimately an underperforming model that will hurt more than it will benefit.
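One common way to put that verified historical data to work is to hold some of it back, so the POC model is evaluated on records it has never seen. The sketch below assumes a simple list of expert-labeled records; the record format and split fraction are illustrative choices, not a prescribed method:

```python
# Hypothetical sketch: splitting verified historical records into
# training and evaluation sets so the POC model is measured on data
# it has never seen. A fixed seed keeps the split reproducible.
import random

def split_records(records, eval_fraction=0.2, seed=42):
    """Shuffle and split labeled records into train and evaluation sets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]

# Each record pairs raw data with the category assigned by subject
# matter experts -- the "verified" labels described above.
records = [(f"record-{i}", "category-A" if i % 2 else "category-B")
           for i in range(100)]
train, evaluation = split_records(records)
print(len(train), len(evaluation))  # 80 20
```

Working from a static, expert-verified snapshot like this also keeps the POC isolated from live production systems, since no live data feed is involved.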
Align Decisions with the Objective
As with any other software development project, you will inevitably face many decision points along the way: how much data to use to train your model, which features to feed into the training, and which algorithms to leverage, among others. You will also encounter unexpected challenges for which more than one solution could be employed. Whatever the decision, always test it against the project objective. This way, decisions are anchored in the expected business outcomes and not purely in technology.
Experiment. Adjust. Repeat.
Depending on the nature of the work, there may be more than one approach that could be followed. For example, if the objective is to classify Tweets, you may want to try a modified and an unmodified natural language processing algorithm and compare the results. Or you may want to deploy totally different approaches and test which one gets you closer to the objective.
When testing multiple solutions, keep in mind that you may need to adjust one, some, or all of the implementations as you evaluate the results. This is an important consideration that will help you balance the cost-benefit tradeoff: implementing multiple approaches requires additional engineering time and potentially more hardware, while the benefits they provide may or may not justify the extra effort.
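The comparison step can be as plain as scoring every candidate on the same held-out data and keeping whichever lands closest to the objective. The approach names and accuracy figures below are made-up placeholders; in a real POC each entry would come from evaluating a trained model:

```python
# Hypothetical sketch: comparing several candidate approaches and
# keeping the one that gets closest to the project objective.
OBJECTIVE = 0.75  # the accuracy threshold from the POC objective

def pick_best(results):
    """Given {approach_name: accuracy}, return the best performer
    and whether it clears the objective."""
    best_name = max(results, key=results.get)
    return best_name, results[best_name] >= OBJECTIVE

# Placeholder scores -- each would be measured on the same
# held-out evaluation set in practice.
candidate_results = {
    "modified_nlp": 0.78,
    "unmodified_nlp": 0.71,
    "rule_based_baseline": 0.64,
}

best, feasible = pick_best(candidate_results)
print(best, feasible)  # modified_nlp True
```

Keeping the evaluation data identical across candidates is what makes the comparison fair; the winner is then judged against the objective, not merely against the other candidates.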
Measure the Results
Following experimentation, review the results against the project objective. Reviewing the results should answer the two questions you set out to validate. You will know if the project is feasible and you will know how much time it took (and how much you spent) to get to the optimal results.
At this point, the business may choose to:
- pursue the operationalization of the POC (meaning bringing the AI proof-of-concept into the production environment and “live” business processes);
- try to optimize the results further; or
- abandon the POC.
Operationalizing the POC entails “plugging in” a production version of the POC to other systems. This could mean writing APIs so existing systems can connect to the new component, embedding the POC into a larger pipeline, or some other approach to bringing the POC “online”. Of course, there will be a cost associated with these efforts, and these costs can be quantified based on the metrics observed during the implementation of the POC.
If the decision is to optimize further, the work is iterated until a new set of results is ready for review.
The Path Forward
Share this article with your engineering team and discuss with them the right approach for your organization. You should also check out other posts talking about AI proofs-of-concept, like this one here.
And of course feel free to reach out for a free, zero-commitment consultation. We are here to discuss your needs and provide you with advice specifically relevant to your situation.
The photo for this post is courtesy of Tyler Anderson.