At Gradient Ascent, we often work with software companies to build new features or products powered by Machine Learning (ML) or Artificial Intelligence (AI).
Through our work, it has become evident that developing ML/AI software is different from developing “traditional” software. Recognizing these differences can reduce risks associated with the development of this new generation of software. This is a new, rapidly evolving, and broad area.
In this article we will discuss three key differences and how they impact software development.
#1. AI development is data driven.
The objective of most ML/AI development is to produce a “model” that generates the output one wants from the provided inputs. For example, Siri’s model takes our speech and turns it into text, and then into an action.
The majority of ML/AI models are built by using data to train them, which is why it's no exaggeration to say that data is becoming the defining competitive advantage of the 21st century. With ML/AI development, the training data effectively is the software.
Data is the Software.
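To make "data is the software" concrete, here is a toy sketch (all data and the model are hypothetical): the same training code, fed two different datasets, produces two different programs.

```python
# A minimal, illustrative example: the training code is identical,
# so the behavior of the resulting "program" is determined by the data.

def train_threshold_model(examples):
    """Learn the single cutoff that best separates the two labels.

    `examples` is a list of (value, label) pairs with label 0 or 1.
    """
    candidates = sorted({value for value, _ in examples})
    best_cutoff, best_correct = None, -1
    for cutoff in candidates:
        # Count how many examples this cutoff classifies correctly.
        correct = sum((value >= cutoff) == bool(label)
                      for value, label in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Identical code; different (made-up) data yields a different model.
strict_data = [(300, 0), (500, 0), (650, 1), (700, 1)]
lenient_data = [(300, 0), (500, 1), (650, 1), (700, 1)]

print(train_threshold_model(strict_data))
print(train_threshold_model(lenient_data))
```

The "code" never changes between the two runs; only the data does, yet the learned behavior differs. That is the sense in which the dataset, not the source file, carries the logic.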
This makes it critical for teams to build competency, tooling, processes, and culture with the goal of building a data-driven organization. Even if a company isn’t applying ML/AI currently, it is critical to get comfortable with collecting, storing, and processing data at scale.
Working with data at scale and in real time involves novel design trade-offs and requires different architectures. But this is about more than "technical" or "engineering" skills. Product managers and designers need to consider how to collect data, design "model feedback loops", and make rigorous, data-driven decisions. Quality assurance needs tools, datasets, and knowledge (for example, statistics) to be able to test models. Serious cross-functional attention needs to be paid to growing regulatory requirements and customer expectations around privacy and security.
This critical need to gather and leverage data also has strategic implications. For example, features that help data collection may need to be prioritized as they enable future innovation. Similarly, “beta” versions of models may need to be deployed to accelerate learning.
Companies that succeed won’t have just a specialized and siloed “data/AI” team but an organizational culture that understands, values, and uses data to build better products that customers love.
#2. AI development is experimental.
While traditional software development is occasionally experimental, ML/AI model development is inherently experimental and iterative. Trial-and-error is central to model development, even when the initiative doesn't require cutting-edge research. For example, if I were building a model to predict the risk of a loan default, I would experiment with a variety of input data and algorithms before arriving at a "production-worthy" model. It's important to note that often, no matter how hard one tries, the model simply won't produce the expected results. In ML/AI development, experiment failure is a routine part of the process.
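The loan-default example above can be sketched as an experiment loop. Everything here is hypothetical: the feature names are invented, and the `evaluate` function is a stand-in for a real train-and-validate step.

```python
import itertools
import random

random.seed(0)  # make the illustrative run reproducible

def evaluate(features):
    """Stand-in for training a model on `features` and scoring it
    on held-out data. A real pipeline would fit and validate here."""
    return random.random()

# Hypothetical candidate inputs for a loan-default model.
all_features = ["income", "debt_ratio", "payment_history", "loan_amount"]

results = []
for size in (1, 2, 3):
    for subset in itertools.combinations(all_features, size):
        score = evaluate(subset)
        results.append((score, subset))  # log every run, including the failures

best_score, best_subset = max(results)
print(f"best so far: {best_subset} (score={best_score:.3f})")
```

Note how many of the runs are "wasted": most configurations lose, and that is expected. The process is a search, not a linear build, which is exactly why timelines are hard to predict.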
This experimental nature means the development process is never linear, often unpredictable, and prone to failure. However, traditional software development processes (especially Scrum and other Agile methodologies) are often "optimized" for velocity and predictability.
This divergence has a broad impact across the software development lifecycle. In particular, extra care is needed in managing roadmaps, timelines, and expectations. The iterative and incremental nature of progress demands more effective communication, both within internal teams and externally with customers. The broader team also needs to understand this new development cycle so that they can respond to it and provide the necessary feedback.
It also means that talented AI developers have a different mindset from traditional developers. Curiosity, a hypothesis-testing mindset, patience, and a tolerance for failure become essential skills in this new experimental way of working.
#3. AI development is probabilistic.
Behind all the hype and fear, Data Science, ML, and AI are fundamentally just math. They leverage computing power, data, and algorithms based on various branches of mathematics, such as linear algebra, statistics, and calculus. The answers generated by most of these systems are numbers, specifically probabilities. Moreover, the answers in aggregate are never 100% accurate.
However, traditional software development is algorithm- or rule-based. Its responses are predictable and repeatable, and unless a system is explicitly designed to be probabilistic, it never answers with probabilities.
This is a stark contrast in underlying philosophies of how software works.
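The contrast can be sketched in a few lines. The cutoff and the coefficient below are made up for illustration; they are not from any real credit model.

```python
import math

def rule_based_decision(credit_score):
    """Deterministic rule: the same input always yields the same yes/no."""
    return credit_score >= 650  # illustrative cutoff

def probabilistic_decision(credit_score):
    """A logistic model answers with a degree of belief, not a verdict."""
    z = 0.02 * (credit_score - 650)   # illustrative coefficient
    return 1.0 / (1.0 + math.exp(-z))  # probability of repayment

print(rule_based_decision(640))      # a hard False, every time
print(probabilistic_decision(640))   # roughly 0.45: close to a coin flip
```

A score of 640 is a flat rejection under the rule, but the probabilistic model reports it as a near toss-up. Designing a product around that second kind of answer is the new challenge.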
Most of us are not comfortable with probabilities. It will take experienced designers to turn the output probabilities of these models into delightful, transparent, trustworthy, insightful, and actionable user experiences.
Similarly, most of us expect software to work 100% of the time and provide accurate results. With ML/AI, that's almost impossible, as any interaction with Siri or Alexa demonstrates. Traditionally, we would call these "bugs", but with AI, this is "working as designed."
Convincing customers and end users to adopt these probabilistic, less-than-perfect systems will be a critical challenge for most organizations. Proper exception handling, effective user interfaces, and rule-based "guard rails" will be needed for most enterprise-grade software. Marketing and sales teams need to manage expectations effectively, while support agents need to be able to educate and help customers.
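One common shape for such guard rails is a confidence-based triage rule: act automatically only when the model is confident, and escalate everything else to a person. The thresholds and the loan-default framing below are assumptions for illustration.

```python
# Hypothetical guard rails around a probabilistic loan-default model:
# confident predictions are automated, the uncertain middle goes to a human.
AUTO_APPROVE_BELOW = 0.10  # default risk low enough to approve automatically
AUTO_DECLINE_ABOVE = 0.90  # default risk high enough to decline automatically

def triage(default_probability):
    if default_probability <= AUTO_APPROVE_BELOW:
        return "auto-approve"
    if default_probability >= AUTO_DECLINE_ABOVE:
        return "auto-decline"
    return "human review"  # the model is not confident enough to act alone

for p in (0.05, 0.5, 0.95):
    print(p, "->", triage(p))
```

The deterministic rules wrap the probabilistic core, so the product as a whole behaves predictably even though the model underneath does not.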
There is a temptation to treat these systems as "black boxes", but building trust with customers will require greater awareness, transparency, and education.
Not every employee needs to be an expert, but enough will need to be comfortable with the core concepts (including the math) to design, build, evaluate, explain, and support these systems.
Key to success: Pragmatism.
ML/AI development doesn’t have to be complex or difficult. Recognizing these differences and embracing the change they bring is a key success factor. Forcing the “old” ways of doing things may not work.
Approach your ML/AI development as a learning exercise. This area is rapidly evolving; in some ways, best practices have yet to emerge, and few people have deep practical experience. I will share our "lessons learned" and internal best practices in a future article.
I welcome your questions, feedback, and experiences in the comments below.
I would particularly love to hear from you on: How do you approach these issues within your own projects and organizations? What challenges have you faced? What best practices do you recommend? What other significant differences do you see between traditional software vs ML/AI development?