Development Process for AI Applications

AI Engineering Development Process

Published On: September 16th, 2021 | Categories: AI, machine learning, Research and Development, software engineering

Motivation for AI Engineering Development Process

AI applications often involve not only classical application engineering but also elements of research (Fig. 1). Sometimes it is not clear from the start which approach will work better, and one needs to conduct experiments to evaluate multiple approaches. For example, when building a machine learning model we typically need to experiment with different features until we find a good feature set. Furthermore, debugging machine learning models is usually not an easy task, and in many cases it is not trivial to evaluate the performance of statistical models or to judge how that performance will translate into business value.
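To make the feature-experimentation point more concrete, here is a minimal sketch of how candidate feature sets could be compared with cross-validation. It assumes a scikit-learn workflow; the data set and the candidate feature sets are synthetic placeholders, not taken from a real project.

```python
# Minimal sketch: compare candidate feature sets by cross-validation.
# The data and the feature-set definitions below are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for a real project's data set.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=4, random_state=42)

# Hypothetical candidate feature sets, given as column indices.
candidate_feature_sets = {
    "baseline": [0, 1, 2, 3],
    "baseline_plus_new_feature": [0, 1, 2, 3, 4],
}

for name, cols in candidate_feature_sets.items():
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X[:, cols], y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

In a real project the candidate sets would come from domain knowledge or feature-importance analysis, and the evaluation metric would be chosen to match the business goal.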

All these factors add an additional layer of complexity that engineering teams need to cope with.

Fig.1: AI systems are at the intersection between Application Engineering and Research.

 

Development Process for Engineering AI Systems

One way to manage the inherent complexity of AI projects is to introduce a clear methodology into the development process. An easy-to-understand process helps the engineering team and the business stakeholders stay on the same page about where the project currently stands and what the next steps are. A well-defined process should also support the construction of a technical road-map and raise transparency in general.

Since AI projects can be seen as sitting at the intersection of application development and scientific research, we argue that these projects can benefit from practices commonly used in research and academia. In this post we present an AI Engineering Cycle. The cycle is motivated by the work presented in [1] and is closely related to the idea of the falsifiable hypothesis introduced by Karl Popper [2].

The AI Engineering Cycle can be seen in Fig. 2. Some steps could arguably be merged into a single logical step, but we have split them here for clarity. Note also that the scope of this process is experimentation; it does not capture all activities – for example, gathering test data is not depicted, although it is still needed.

Fig.2: AI Engineering Cycle for constructing AI systems.

In my experience, one of the key points for keeping clarity in AI projects is to always start an experiment with a clear falsifiable hypothesis, i.e. a hypothesis that can be proven wrong. If the engineering team does not start with a clear hypothesis in mind, there is a risk that the team loses itself in experimentation – a problem that can be compared to chasing one’s tail.

After starting with a clear falsifiable hypothesis and going through the experimentation cycle, one can evaluate the results and prove or disprove the hypothesis. With the new insights, the engineering team can then formulate a new hypothesis and start the next experimentation cycle.

Example

For example, if we want to evaluate whether a new feature will improve the performance of a machine learning model, one run through the AI Engineering Cycle could look as follows (a minimal code sketch of this run is given after the list):

  1. Falsifiable Hypothesis: The new feature X will not improve the performance of the model with the data set T.
  2. Design an Experiment: in this case the design is quite simple: we just need to add the new feature to the training process, train the model, and evaluate it on the test data set.
  3. Implement the Experiment: add the new feature to the training/validation/test data sets and to the training process.
  4. Run the experiment: run the training to produce a model.
  5. Evaluate results: evaluate the model with the same evaluation metrics as the baseline model, so that the results are comparable.
  6. Draw conclusions about the hypothesis: if the evaluation metrics indicate better performance, we have disproved the negative hypothesis. The feature does improve the model on the given data set T. As a next experiment, we could for example test whether the feature is still beneficial with a larger data set.
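Below is a minimal, hypothetical sketch of one such run for a binary classification task. It assumes a scikit-learn setup; the synthetic data stands in for the data set T and its last column plays the role of the new feature X, so all names and numbers are placeholders only.

```python
# Hypothetical sketch of one run through the AI Engineering Cycle:
# train a baseline model and a model with the new feature X on the same
# data, evaluate both with the same metric, and draw a conclusion about
# the falsifiable hypothesis from step 1.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data set T; the last column plays the role of feature X.
X_all, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X_baseline, X_with_new_feature = X_all[:, :-1], X_all

def run_experiment(features, labels):
    """Implement and run the experiment, then return the evaluation metric (steps 3-5)."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0
    )
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    return f1_score(y_test, model.predict(X_test))

baseline_score = run_experiment(X_baseline, y)
new_feature_score = run_experiment(X_with_new_feature, y)

# Step 6: draw a conclusion about the hypothesis
# "the new feature X will not improve the performance of the model on data set T".
print(f"baseline F1: {baseline_score:.3f}, with feature X: {new_feature_score:.3f}")
if new_feature_score > baseline_score:
    print("Hypothesis disproved: feature X improves the model on this data set.")
else:
    print("Hypothesis not disproved: no improvement observed with feature X.")
```

Note the point from step 5: both models are evaluated on the same split and with the same metric (F1 here), otherwise the comparison with the baseline would not be meaningful.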

 

Conclusion

This short article proposes a simple development process for engineering AI systems. The presented process is hopefully generic enough to be useful across different applications.

References:

[1] Sanders P. (2009) Algorithm Engineering – An Attempt at a Definition. In: Albers S., Alt H., Näher S. (eds) Efficient Algorithms. Lecture Notes in Computer Science, vol 5760. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03456-5_22

[2] Popper K. The Logic of Scientific Discovery.

Do you have questions about this article or just want to discuss the topic? Do not hesitate to contact us at

— — —

We put a lot of effort in the content creation in our blog. Multiple information sources are used, we do our own analysis and always double check what we have written down. However, it is still possible that factual or other mistakes occur. If you choose to use what is written on our blog in your own business or personal activities, you do so at your own risk. Be aware that Perelik Soft Ltd. is not liable for any direct or indirect damages you may suffer regarding the use of the content of our blog.

Author: Luben Alexandrov

Luben is the main consultant for software engineering and software architecture questions at Version Lambda. Always looking for ways to improve the efficiency of the software teams.
