In recent years, there has been a shift among companies from simply considering the technical implications of their technology to acknowledging, and attempting to advance, solutions that ensure their technologies, including artificial intelligence (AI), act responsibly. According to the IBM Institute for Business Value AI Ethics survey of 1,200 executives across 22 countries and 22 industries, nearly 80 percent of CEOs are prepared to take action to increase AI accountability, up from only 20 percent in 2018. Awareness of the importance of AI ethics is also notably extending into the boardroom: 80 percent of respondents in this year’s survey pointed to a non-technical executive as the primary “champion” for AI ethics, compared to 15 percent in 2018.
The benefits of AI continue to grow.
AI has transformative potential. In 2021, “AI augmentation,” defined as the “human-centered partnership model of people and AI working together,” created an estimated $2.9 trillion in business value, according to Gartner, and saved an estimated 6.2 billion hours of worker productivity. As investment and adoption continue to dramatically grow, along with the development of no-code or low-code solutions that allow people to customize their own AI without extensive technical knowledge, AI will continue to become more accessible and impactful to the masses. AI is capable of augmenting human capacity in numerous areas, from research and analysis to the completion of basic daily tasks like managing our calendar and finances.
And, we have already seen AI unlock significant breakthroughs: Last year, researchers at IBM, Oxford, Cambridge, and the National Physical Laboratory showed how AI-designed antimicrobial peptides interact with computational models of a cellular membrane, a development that could have wide implications for drug discovery.
Ensuring AI is trustworthy is a balancing act, but a worthy one.
While the promises of AI are great, so too are the pitfalls if we don’t ensure it is trustworthy: fair, explainable, transparent, robust, and respectful of our data and insights. The definition of “untrustworthy AI” may be obvious to most: discriminatory, opaque, misused, and otherwise falling short of general expectations of trust. Yet advancing trustworthy AI can remain challenging, given the pragmatic balancing act it sometimes requires: for example, between “explainability” (the ability to understand the rationale behind an AI algorithm’s results) and “robustness” (an algorithm’s accuracy in arriving at an outcome).
Set AI ethics practices in the proper strategic context.
As with any wide-ranging initiative, implementing AI ethics begins with determining the right strategy for success. Consider how critical building trustworthy AI is to business strategy and objectives: What are the key value creators that AI could accelerate? How will success be measured?
It’s also important to consider the role of AI innovation in an organization’s growth strategy and approach: is the organization a “trailblazer” that constantly pushes the boundaries of putting new technology into practice, or a “fast follower” that prefers more tested approaches? The answers to these questions will help identify and codify key AI ethics principles and determine the human + machine balance in an organization.
Establish a governance approach to implement AI ethics.
The next step is for a business to establish its own AI ethics governance framework. This starts with incorporating the full range of perspectives (e.g., business leaders, clients, customers, government officials, and society at large) on topics such as privacy, robustness, fairness, explainability, and transparency. It also means ensuring a diversity of identity and perspective: IBM’s new research shows there are 5.5 times fewer women on AI teams than in the organization at large, along with 4 times fewer LGBT+ individuals and 1.7 times fewer Black, Indigenous, and People of Color (BIPOC).
Integrate ethics into the AI lifecycle.
Finally, ethics is not a “set it and forget it” process. There are a number of additional steps to take once an organization establishes its governance and maintenance system. For one, it must continue engaging with its internal and external stakeholders on the topic, as well as capture, report, and review compliance data. It must also drive and support education and diversity efforts for internal teams, and define integrated methodologies and toolkits that champion the principles of AI ethics.
AI will only find its way further into our everyday lives; it should be advanced responsibly and in a way that ensures ethical principles are at the technology’s core. Thankfully, the playbook for AI ethics is becoming clearer, more practical, and more tangible. But it’s on all of us, across industry, government, research and academia, and the whole of society, to champion it.