As businesses get started on AI initiatives, their focus quickly shifts from the technology itself to strategy, governance structures and ethical practices. In a recent webinar, Cognizant experts discussed the essential components of a real and responsible AI deployment, a discussion that yielded these eight top tips.
- Think of data science not as an isolated practice but as part of an AI value chain. The purpose of data science is to glean insights from terabytes of data, but data science in and of itself is not the endgame. Instead, it’s essential to fit data science into the AI value chain. The first stage is putting the data into a human-centric perspective: What is the human context for what the data is telling us? The second is using machine learning to discover and predict future patterns: What does the data reveal about future behaviors and trends? The final stage is using this information to begin making AI-driven decisions. (The first sketch after this list walks through these three stages.)
- It’s not “that” you’re doing AI but “why” you’re doing AI. Many businesses are understandably anxious about where to start with AI. But the starting point has less to do with AI and more to do with the business itself. The right place to start, then, is understanding your business pain points and identifying where AI is the best tool for resolving them. For example, most companies, other than the FANGs, would still exist without AI; however, they might need to improve the customer experience or increase claims processing efficiency, and the most effective way to do that is with AI.
- Leaving ethics out of AI is as bad as leaving ethics out of business. Two years ago, the big issue with AI was selecting a platform. More recently, a good deal of attention has been paid to AI training data and data curation. Now, the spotlight is shifting to AI governance and ethics. More businesses today understand that just as they have a chief ethicist responsible for the actions and outcomes of decisions made with human intelligence, they need the same accountability when they turn those decisions over to AI.
- Businesses need to control for AI bias. On a closely related note, businesses need to ensure that they understand and can control for bias when building and operating their AI systems. There are three types of bias to understand: bias that stems from the data used to train the system (if the data is biased, the outcomes will be biased); bias on the part of the employees creating the AI systems; and bias propagated by self-learning AI systems as they, themselves, create new AI systems. Businesses are responsible for controlling the behavior of their AI systems and ensuring they achieve desired and ethical outcomes; a minimal bias-audit sketch appears after this list. Sometimes this means walking away from entire business opportunities, as Google did when it recently discontinued an AI project with the Pentagon. Human judgment is integral to deciding whether AI initiatives are consistent with the business’s values and customer priorities.
- It’s never too early to think about governance. Most organizations today recognize that AI isn’t “just another technology implementation” to be executed by the CIO’s office; it requires the entire organization to be involved. As a result, governance concerns, along with privacy and ethics, are being addressed as soon as AI experimentation begins. This is particularly true in highly regulated industries like financial services and healthcare. With governance mechanisms in place, businesses can be ready to shut down an AI initiative that’s showing unwanted results, as Amazon did when its AI recruiting system showed bias against women.
- Plan ahead for harmful human behaviors. We often hear about AI causing job loss; less discussed is humans’ potential for sabotaging AI systems. It wouldn’t take much, for example, for thieves to steal the tires from a self-driving pizza delivery truck; the autonomous vehicle would likely not be trained to defend itself by hitting the gas or shifting into reverse as a human might. And a customer who would never berate a human customer service agent might think nothing of screaming at a chatbot to game the system. When designing AI systems, businesses need to recognize that people will likely behave differently when they know they’re interacting with an AI system rather than a human.
- Fixing AI failures is a team sport. With self-learning AI systems, continuous maintenance is a necessity. And when things aren’t going in the right direction, it’s not the programmers who should be consulted; businesses need a team of ethicists, sociologists, marketers and others to diagnose the problem and set the system back on course. If your child were having problems in school, after all, you wouldn’t call the obstetrician who delivered the baby; you’d get the teachers, social workers and administrators at the school involved. So it is with AI: these systems are built with technology, but their evolution requires a human hand.
- Be open to the unforeseen, non-AI side benefits of AI. While an AI initiative might begin with an interest in the technology, many organizations ultimately find that the biggest positive changes come from non-tech discoveries. One example is a hospital that used AI to better understand the factors determining health outcomes. Among patients in the U.S. Medicaid program, for instance, social factors such as access to food, shelter and transportation had a greater impact on health outcomes than weight, blood pressure or body temperature. Using natural language processing (NLP), the hospital was able to extract those hidden nuggets of information and ensure patients were connected with relevant social service organizations (the last sketch after this list illustrates the idea). In many cases, however, physicians were unsure of how to help, revealing process gaps that had to be fixed to improve the patient experience. In this way, the AI system created an opportunity to improve the broader care ecosystem through better collaboration among hospital staff.
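To make the value chain in the first tip concrete, here is a minimal sketch in Python. The churn scenario, the two behavioral features and the 0.5 decision threshold are all hypothetical illustrations rather than anything from the webinar; the point is only how the three stages connect.

```python
# A minimal sketch of the three-stage AI value chain, using scikit-learn
# and entirely hypothetical customer-churn data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stage 1: put the data in a human-centric perspective. Rows are customers;
# the columns are behaviors a human can reason about
# (support calls per month, days since last purchase).
X = np.array([[1, 5], [0, 12], [7, 40], [9, 55]])
y = np.array([0, 0, 1, 1])  # 1 = the customer eventually churned

# Stage 2: use machine learning to discover and predict future patterns.
model = LogisticRegression().fit(X, y)

# Stage 3: feed the prediction into an AI-driven business decision.
new_customer = np.array([[6, 38]])
churn_risk = model.predict_proba(new_customer)[0, 1]
if churn_risk > 0.5:  # hypothetical business threshold
    print(f"Churn risk {churn_risk:.2f}: route to the retention team")
else:
    print(f"Churn risk {churn_risk:.2f}: no action needed")
```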
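The bias tip can also be grounded in code. Below is a minimal audit of the first kind of bias, the kind visible in outcomes: it compares positive-outcome rates across groups (a demographic-parity check). The loan-approval data, the choice of metric and the 0.2 threshold are hypothetical assumptions for illustration, not a prescription from the webinar.

```python
# A minimal sketch of auditing model outcomes for group bias.
# Data, metric and threshold are hypothetical.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_gap(preds, group)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.2:  # hypothetical governance threshold
    print("Gap exceeds policy threshold -- escalate for human review")
```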
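Finally, the hospital example hints at what NLP extraction of social factors might look like. The sketch below uses naive keyword matching over a hypothetical note purely for illustration; a production clinical system would rely on a trained NLP model and vetted medical terminologies, not a hand-built phrase list.

```python
# A minimal, illustrative sketch of flagging social determinants of health
# in free-text notes. The categories, phrases and note are hypothetical.
SOCIAL_DETERMINANTS = {
    "food": ["food insecurity", "skips meals", "food bank"],
    "housing": ["homeless", "unstable housing", "eviction"],
    "transportation": ["no transportation", "missed appointments", "no bus access"],
}

def flag_social_needs(note):
    """Return the social-determinant categories mentioned in a note."""
    text = note.lower()
    return {category for category, phrases in SOCIAL_DETERMINANTS.items()
            if any(phrase in text for phrase in phrases)}

note = "Patient reports unstable housing and often skips meals; BP normal."
print(flag_social_needs(note))  # e.g. {'food', 'housing'}
```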
The webinar is available on demand.