GUEST:
Here’s a question that will keep future Artificial Intelligence (AI) entrepreneurs up at night: How do you manage a product when the software starts writing itself?
We’re not quite there yet, but as we build smarter, more complex software with elements driven by AI, we’re also making less predictable software. We know that AI will bring more capabilities to software, but it will also make software harder to design and manage, since it will sometimes behave in unplanned ways. That’s simply a phenomenon that comes with building complex systems, and it’s where software is headed. This is where complexity theory meets software.
Most of us who have been entrepreneurs, executives, engineers, and product managers in the software industry have designed and managed software for decades while safely assuming a reasonable level of input-output certainty. Meaning: when we input data, we can easily figure out what the correct output should be. That’s because we have mostly been working on simple systems. If you entered A and B, C would come out. If you didn’t get C, you knew you had a defect that needed to be addressed. With simple systems, you can run the same set of test cases over and over again and expect the same outputs every time.
Intelligent agents and other dynamic AI-based systems turn this concept on its head: self-learning software constantly adapts its outputs based on inputs from interactions with other systems and people. Some systems today have already gotten pretty complex (especially in the enterprise), but introducing more AI-based algorithms will accelerate complexity beyond anything we’ve dealt with before. Systems will go from being difficult to decipher to being effectively indecipherable. And with intelligent agents, we’re massively increasing the number of potential inputs (sometimes the input could be any combination of words in an entire language), which dramatically increases the number of ways to interpret the input and widens the array of possible outputs.
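To make the shift concrete, here is a minimal sketch (my own illustration, in Python) of what this does to testing: an exact assertion works for a simple system, while an AI-driven component can only be checked against properties or tolerances. The `model_score` function is a hypothetical stand-in for a learned model, not any real API.

```python
# Simple system: same inputs, same output, so an exact assertion always works.
def add(a: int, b: int) -> int:
    return a + b

assert add(2, 3) == 5  # enter A and B, C comes out, every time

# AI-driven system: the "correct" output can drift as the model retrains,
# so tests assert properties or ranges rather than exact values.
import random

def model_score(text: str) -> float:
    # hypothetical stand-in for a learned model whose output varies run to run
    return min(1.0, max(0.0, len(text) / 100 + random.uniform(-0.05, 0.05)))

score = model_score("cancel my subscription")
assert 0.0 <= score <= 1.0  # property check: output stays in range
# an exact check like `assert score == 0.27` would break as the model learns
```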
Consider neural nets: they produce outputs from inputs, but between the input and the output sits a black box of computation. We won’t know exactly why those particular inputs generated those outputs. And new training (how the algorithm updates its learning) means that outputs may change given the same inputs. So, dynamic updates from a continuously learning piece of software mean there will be layers of learning happening in real time that affect outputs in ways that won’t be predictable. And some of these outputs will be fed into other parts of the system, creating additional layers of complexity. We are moving toward more complex system design. The term for the new, unexpected things produced by complex systems is emergence, and our software will only show more emergent behavior as we make it more complex.
This is more of an observation and an area of planning than a concern for me. We work with people every day who are unpredictable. No one knows exactly all the reasons people do what they do from moment to moment, yet we have found ways to collaborate with one another and get work done. For software, we’ll likewise need to think through these issues as the systems we build become more complex. So, based on experience, I’ve created some fundamental tips that can help with the issues above, as well as other issues you’ll encounter when building AI-driven products and AI-based intelligent agents. Note: depending on what you are building, you may need to ignore or alter some of these tips based on your particular goals.
1. Domain focus
Limiting your domain can help limit complexity. So it’s a good idea to simplify and focus the things you have control of, like the domain of expertise of your software. Keep your product constrained to a narrow domain at first (focused on a logical set of jobs to do for the customer and a logical body of knowledge around one area of expertise, for example) and learn before you expand into other domains.
2. Learning feedback loops
Every interaction is a chance to learn. Your system should learn something from all (or almost all) interactions with humans and other systems. Feedback loops are needed for your software to self-correct and learn, and they also give you the information you need to adjust your product and plan for the future. Within your domain, be cognizant of what to optimize for at a high level, but don’t over-optimize too soon. The right feedback loops for an AI product can be murky at first, so start with a more general, larger set of capabilities and then look for the problems you will be solving for the user. As people use the product, you can base your optimizations on actual customer usage over time.
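As a rough sketch of the simplest version of such a loop, the snippet below (names are mine, not a specific library) just logs every interaction along with the user’s reaction, so the records can feed later retraining and product decisions:

```python
# Minimal feedback-loop sketch: persist every interaction with a reaction
# signal so it can be mined for retraining and product planning later.
import json
import time

class FeedbackLoop:
    def __init__(self, log_path: str = "interactions.jsonl"):
        self.log_path = log_path

    def record(self, user_input: str, system_output: str, signal: str) -> None:
        """Log one interaction plus the user's reaction
        ('accepted', 'corrected', 'abandoned', ...)."""
        event = {
            "ts": time.time(),
            "input": user_input,
            "output": system_output,
            "signal": signal,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(event) + "\n")

loop = FeedbackLoop()
loop.record("reset my password", "Sent a reset link.", signal="accepted")
```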
3. Human-in-the-loop
Sometimes, a human brain is needed to augment the system. “Human-in-the-loop” refers to having a human complete certain tasks, either to improve a user experience or to figure something out that is too difficult for the system. Designing this in as part of your system is useful for doing work, or validating parts of a process, that the system can’t yet do well. And the actions the human took can feed back into the system to train it to do the task better on its own in the future. Many companies building AI products use a human in the loop to jump in and do some of the work as part of their back end.
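One hedged sketch of what that might look like in code: predictions below a confidence threshold are routed to a human queue, and the human’s answer is saved as a training example. The threshold, queue, and `resolve_by_human` helper are all assumptions for illustration:

```python
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.8                     # assumed value; tune per product
human_queue: List[str] = []
training_examples: List[Tuple[str, str]] = []

def resolve_by_human(request: str) -> str:
    # placeholder for the real back-end workflow where a person answers
    return f"[human-reviewed answer to: {request}]"

def handle(request: str, prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                       # confident: answer directly
    human_queue.append(request)                 # otherwise escalate to a person
    answer = resolve_by_human(request)
    training_examples.append((request, answer)) # the human's work trains the system
    return answer

print(handle("Why was I billed twice?", "Duplicate charge detected.", 0.45))
```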
4. Use all the context you can get
Context adds intelligence. (Or at least the appearance of intelligence.) We’re collecting more contextual data than ever, and that context will be needed for better AI-driven systems across a wide spectrum of industries. For many systems that interact with humans, context will be king. The abilities of intelligent agents will be expanded or constrained by how much contextual data (location, related data, personalized information, etc.) the application can get. To progress, contextual information will have to be collected directly from the user and from any other applications that can be accessed.
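Purely as an illustration, one way to make this concrete is to gather each contextual source into a single object that travels with the request. The fields and stub lookups below are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RequestContext:
    user_id: str
    location: Optional[str] = None                            # from the device, if permitted
    recent_activity: List[str] = field(default_factory=list)  # related data from your app
    preferences: dict = field(default_factory=dict)           # personalized information

def lookup_location(user_id: str) -> Optional[str]:
    return "Boston, MA"  # stand-in for a real, permissioned geolocation lookup

def last_events(user_id: str, n: int) -> List[str]:
    return ["opened_ticket", "viewed_billing"][:n]  # stand-in for app history

def build_context(user_id: str) -> RequestContext:
    ctx = RequestContext(user_id=user_id)
    ctx.location = lookup_location(user_id)
    ctx.recent_activity = last_events(user_id, 10)
    return ctx

print(build_context("u-42"))
```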
5. Detect failure (create complex/emergent failure detection)
Emergent systems require real-time performance evaluation. As we develop systems that operate dynamically, we’ll also need to rethink QA (quality assurance), mainly by augmenting current QA processes. There is more work to be done here, but we will need models for real-time error detection so that the system can fail gracefully or jump to another path of action. One way to do this would be similar to how humans do it: by getting feedback from an independent observer. I mean an application that constantly observes the production system and looks for abnormal or inaccurate behavior. Once it detects a problem, it gives feedback to the main production system so that it can improve and adjust its actions. It’s a bit like a performance evaluation, except all digital and in real time. I imagine this application could look similar to virus or spam detection software, which makes a fuzzy determination of “normal” vs. “abnormal” behavior.
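A minimal sketch of that observer, assuming the production system’s outputs can be summarized as a numeric score: keep a running profile of recent outputs and flag anything far outside the fuzzy band of “normal.” The window size and threshold are illustrative assumptions:

```python
from collections import deque
import statistics

class OutputObserver:
    """Independent observer that watches production outputs for anomalies."""

    def __init__(self, window: int = 200, z_threshold: float = 3.0):
        self.recent = deque(maxlen=window)  # rolling history of output scores
        self.z_threshold = z_threshold      # assumed cutoff for "abnormal"

    def check(self, value: float) -> bool:
        """Return True if `value` looks abnormal relative to recent history."""
        abnormal = False
        if len(self.recent) >= 30:          # wait for a baseline first
            mean = statistics.mean(self.recent)
            stdev = statistics.pstdev(self.recent) or 1e-9
            abnormal = abs(value - mean) / stdev > self.z_threshold
        self.recent.append(value)
        return abnormal

observer = OutputObserver()
for score in [0.50, 0.52, 0.48, 0.51] * 10 + [0.99]:
    if observer.check(score):
        print("abnormal output detected:", score)  # feed back to the main system
```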
6. Create smart failover reactions
Expect the unexpected. Humans are unpredictable, and combining unpredictable humans with unpredictable machines compounds the problem. Plan for smart failover experiences that can ask for clarity or clearly communicate the confusion to the user. Plan ahead so that the user won’t be confused by the dynamic nature of the system.
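For illustration, a failover reaction might look like the sketch below: when the system has no confident interpretation, it says so plainly and asks a clarifying question instead of guessing. The confidence threshold and wording are assumptions:

```python
from typing import Optional

def respond(user_input: str, intent: Optional[str], confidence: float) -> str:
    if intent is None:
        # communicate the confusion plainly rather than acting on a bad guess
        return ("Sorry, I didn't understand that. Could you rephrase it, "
                "or pick one of the options below?")
    if confidence < 0.6:  # assumed threshold: unsure, so ask for confirmation
        return f"Did you mean '{intent}'? (yes / no)"
    return f"OK, doing: {intent}"

print(respond("blorp the frobnicator", intent=None, confidence=0.0))
```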
7. Collect and maintain trainable, quality data
Use interactive systems to collect good data interactively. When designing inputs to the system via any interface, think about how you can check the quality and trainability of the data you are collecting. If you are designing an intelligent agent, you can ask the user clarifying questions in real time. If not, you can still build techniques to ensure data quality at input. There may also be old datasets that could help you get started with a new customer; quality will be a factor there as well, since old datasets may not be well maintained and may need to be cleaned up.
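As a small illustration of checking quality at the point of collection, this sketch validates a field as it arrives and, when something looks off, returns a clarifying question for the agent to ask. The validation rules are placeholder assumptions:

```python
def collect_field(raw_value: str, field_name: str) -> dict:
    """Validate one collected field; return either the clean value
    or a clarifying question for the agent to ask in real time."""
    value = raw_value.strip()
    issues = []
    if not value:
        issues.append("it was empty")
    if field_name == "email" and "@" not in value:
        issues.append("it doesn't look like an email address")
    if issues:
        return {"ok": False,
                "ask": f"That {field_name} looks off ({'; '.join(issues)}). "
                       f"Could you re-enter it?"}
    return {"ok": True, "value": value}

print(collect_field("will.murphy", "email"))  # -> asks a clarifying question
```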
8. Create an AI flywheel
Data from users makes the system more valuable, which helps attract more users and more data, which in turn makes the system more valuable still. With AI-driven products, information collected from all the users of the system (and from other systems) makes the software smarter, which attracts more users, whose data feeds back into the software, and so on. This creates a flywheel of data collection and an increasingly intelligent system that builds on itself. It’s a way to create unique value over time, and it makes it difficult for competitors to catch up as the cycle generates its own momentum.
9. Create value for the user ASAP
Give value while collecting data. Balance the collection of data from the user with something useful for the user. The ideal scenario is to provide value while you are learning. Also, if possible, find value in old data that can be loaded into the system through integrations with other systems. It’s good to plan for all the great things you can do with data collected in the future, but you have to have some immediate value so that people stick around.
10. When creating intelligent agents, trust and knowledge must be built via interactions
If you are building an intelligent agent, the onboarding never ends. The initial proactive experiences users have with the agent, combined with their ongoing interactions, will drive how they can and will use the agent long term. So smart onboarding (the introduction to the agent) and ongoing education of the user are key. Humans develop familiarity with other people through repeated interactions over time, and that is how they will interact with intelligent agents too. If the user and the agent haven’t communicated in a while, the user may forget about the agent altogether. It’s also important to think about how the user will discover what the agent can do; the agent may need to send reminders of new skills it has acquired, or simply provide a visual menu of its capabilities. The important thing is to present the agent’s capabilities so that the user understands what it can do and remembers that it exists. The proactive nature of these communications will drive the usage and user expectations needed to do the other things on this list.
11. When creating intelligent agents, a hybrid interface is usually the right interface
In the long term, I predict that intelligent agents will communicate better with humans than humans communicate among themselves, because intelligent agents will have a wider variety of communication methods and input options than humans do. The best path for chat-based or other visual user experiences will usually not be a totally text-driven experience. An interface that contains both text elements and visual elements (buttons, etc.) is what we call a hybrid interface, and it allows a wide array of input and output options that can be used in the right context to communicate most efficiently. It’s also at this point of interaction with users that number 7 (collect quality data) can be enforced. Artful communication with the user is needed to make sure good information is collected that can make the software smarter.
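To illustrate, a hybrid message might carry free text and visual elements in a single payload, something like the sketch below. The schema is my own assumption, loosely modeled on common chat-platform message formats:

```python
# Hypothetical hybrid-interface payload: free text plus buttons in one message.
message = {
    "text": "I found 3 open tickets. What would you like to do?",
    "buttons": [
        {"label": "Show newest", "value": "show_newest"},
        {"label": "Close all", "value": "close_all"},
        {"label": "Something else", "value": "free_text"},  # escape hatch to text
    ],
}
# Buttons constrain the input space, which also keeps the collected data clean
# and trainable (number 7); free text remains available when buttons don't fit.
```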
12. Create performance metrics
Managing a system requires managing metrics. Metrics are always important in business, especially once you start getting a significant data set from a larger number of users of your product. Success metrics for AI-driven products will all be slightly different, but they will fall into four categories: 1) quality of the data collected that can be used for training; 2) quality of the modeling, in order to generate the right outputs; 3) AI flywheel growth (for some companies); and 4) customer success metrics for your particular business, including the quality of the outputs delivered to users. As systems get more complex, the right metrics will be needed to ensure you are managing your complex system well.
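As a sketch, those four categories could be tracked as a simple record like the one below; the concrete fields inside each category are illustrative assumptions, since the right measures vary by business:

```python
from dataclasses import dataclass

@dataclass
class AIProductMetrics:
    trainable_data_rate: float  # 1) share of collected data clean enough to train on
    model_accuracy: float       # 2) modeling quality, e.g. accuracy on an eval set
    flywheel_growth: float      # 3) week-over-week growth in users/data feeding the loop
    csat: float                 # 4) customer success, e.g. satisfaction with outputs

snapshot = AIProductMetrics(
    trainable_data_rate=0.72,
    model_accuracy=0.91,
    flywheel_growth=0.04,
    csat=4.3,
)
print(snapshot)
```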
Final note on managing AI-driven products
And, finally, behind many of these thoughts is a common philosophy: we have to start thinking about managing complex computer systems, driven by the latest AI capabilities, that are capable of emergent behavior. That means managing the parameters, rules, checks, and balances of the system in a way that provides stability. Think about managing an economy. You don’t manage an economy (well) by explicitly setting the prices of all goods and services. You manage at the higher system level: you set forward a general set of rules (laws) that make sense for that system and manage a few system-level variables (like the federal funds rate), and the independent agents (in this case, people) make self-optimizing decisions, setting prices by interacting with each other based on their independent needs and wants. Managing complex software systems is similar. It means designing for good information collection, setting the right parameters, picking the right success metrics for your software, and turning the right knobs at the system level to keep the system in the best state of success that you can manage. Therefore, part of AI-driven product management is really complex system design, and it will need more thinking from the perspective of complex systems.
Will Murphy is the VP of Product and Business Development and a cofounder at Talla, an AI-powered customer service company.