Applying your company’s values to technology, people, and processes
Every aspect of an organization disrupted by technology represents an opportunity to gain or lose stakeholders’ trust. Leaders are approaching trust not as a compliance or PR issue but as a business-critical goal.
At the same time, headlines regularly chronicle technology-based issues such as security hacks, inappropriate or illegal surveillance, misuse of personal data, the spread of misinformation, algorithmic bias, and lack of transparency. The distrust these incidents breed in stakeholders—whether customers, employees, partners, investors, or regulators—can significantly damage an organization’s reputation.1 Indeed, consumer trust in commercial enterprises is declining, citizens are becoming wary of public institutions, and workers are asking employers to explicitly state their core values.2
In what we recognize as an emerging trend, some companies are approaching trust not as a compliance or public relations issue but as a business-critical goal to be pursued—one that can differentiate them in an increasingly complex and crowded market. As discussed in Deloitte’s 2020 Global Marketing Trends report, brand trust is more important than ever for businesses—and it’s all-encompassing. Customers, regulators, and the media expect brands to be open, honest, and consistent across all aspects of their business, from products and promotions to workforce culture and partner relationships.3
Every aspect of a company that is disrupted by technology represents an opportunity to gain or lose trust with customers, employees, partners, investors, and/or regulators. Leaders who embed organizational values and the principles of ethical technology across their organizations are demonstrating a commitment to “doing good” that can build a long-term foundation of trust with stakeholders. In this light, trust becomes a 360-degree undertaking to help ensure that an organization’s technology, processes, and people are working in concert to maintain that foundation.
As the adage reminds us, trust is hard to gain and easy to lose.
The ethical technology terrain
The term ethical technology refers to an overarching set of values that is not limited to or focused on any one technology; instead, it addresses the organization’s use of technologies as a whole and the ways in which they are deployed to drive business strategy and operations.4 Companies should proactively evaluate how they can use technology in ways that align with their fundamental purpose and core values.
Ethical technology policies do not replace general compliance or business ethics, but they should all connect in some way. Just as your approach to cybersecurity hasn’t taken the place of your company’s more general privacy policies, your ethical technology approach should complement your overall approach to ethics and serve as its logical extension in the digital realm. Some companies are expanding the mission of existing ethics, learning, and inclusion programs to include ethical technology, while others are maintaining separate technology ethics programs. Both approaches help keep technology ethics top of mind across the organization and encourage executives to consider the distinctions between technology-related ethical issues and broader corporate and professional ethics concerns.
The fifth annual study of digital business by MIT Sloan Management Review and Deloitte found that just 35 percent of respondents believe their organization’s leaders spend enough time thinking about and communicating the impact of digital initiatives on society. While respondents from digitally maturing companies are the most likely to say their leaders are doing enough, even then, the percentage barely breaks into a majority, at 57 percent.5
These findings suggest that organizations still have significant room for improvement. Those that develop an ethical technology mindset—demonstrating a commitment to ethical decision-making and promoting a culture that supports it—have an opportunity to earn the trust of their stakeholders.
In pursuit of trust
In the digital era, trust is a complex issue fraught with myriad existential threats to the enterprise. And while disruptive technologies are often viewed as vehicles for exponential growth, tech alone can’t build long-term trust. For this reason, leading organizations are taking a 360-degree approach to maintain the high level of trust their stakeholders expect.
In technology we trust
Artificial intelligence (AI), machine learning, blockchain, digital reality, and other emerging technologies are integrating into our everyday lives more quickly and deeply than ever. How can businesses create trust with the technologies their customers, partners, and employees are using?
- Encode your company’s values. With technology ingrained in the business and machine learning driving business decisions and actions, an organization’s values should be encoded and measured within its technology solutions. Digital systems can be designed to reduce bias and enable organizations to operate in line with their principles.6 For instance, a city government worked with policy institutes to develop an algorithm toolkit intended to identify ways to minimize unintended harm to constituents by limiting biases in the criminal justice system and other institutions. Safeguards can promote stakeholder welfare by helping prevent users from engaging with technology in unhealthy or irresponsible ways. Examples include a company that imposes time and spending limits on habit-forming games, a content aggregator that prompts users to be skeptical about the veracity of crowdsourced information, and cloud computing providers that automatically issue alerts before customers go over budget. Explainable AI technologies can clarify how AI-driven decisions are made. For instance, to enhance confidence in AI-supported medical diagnoses, health care companies are developing solutions that assign each diagnosis a confidence score that explains the probability and contribution of each patient symptom (vital signs, signals from medical reports, lifestyle traits, etc.) to that diagnosis. Clinical professionals can see why the conclusion was made and make a different one if required.7
- Build a strong data foundation. Without methodically and consistently tracking what data you have, where it lives, and who can access it, you cannot create an environment of trust. A strong data foundation unifies stakeholders around a single vision of data accountability and delivers on secure technology that supports effective data management.8 Leaders should aim to give stakeholders some control over how their data will be used and delete data on demand unless it’s necessary to keep it for legal or regulatory purposes.
- Harden your defenses. Deloitte’s 2019 Future of Cyber Survey9 reveals that executives are increasingly spending significant amounts of time focusing on cyber issues, and rightly so. Cyber defenses represent your commitment to protecting your customers, employees, and business partners from those who do not share their values—or yours. Cyber risk strategy should be built and managed from the ground up, embedded in the business mindset, strategy, and policies, not only within IT. Business leaders can collaborate with IT to create a comprehensive cyber risk strategy—encompassing security, privacy, integrity, and confidentiality—to help build stakeholder trust and drive competitive advantage. This requires considering the organization’s risk tolerance, identifying the most vulnerable gaps as well as the most valuable data and systems, then devising plans for mitigation and recovery.
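The explainable AI approach described above—assigning a diagnosis a confidence score along with each symptom's contribution—can be sketched in a few lines. This is a minimal illustration using a simple logistic model; the symptom names and weights are hypothetical, not drawn from any real clinical system.

```python
import math

def explain_diagnosis(symptoms, weights, bias=0.0):
    """Return an overall confidence score plus each symptom's share of the evidence."""
    # Each symptom's raw contribution to the model's output
    contributions = {name: weights[name] * value
                     for name, value in symptoms.items()}
    logit = bias + sum(contributions.values())
    confidence = 1 / (1 + math.exp(-logit))  # squash to a 0-1 confidence score
    # Express each symptom's share of the total evidence, so a clinician
    # can see which inputs drove the conclusion
    total = sum(abs(c) for c in contributions.values()) or 1.0
    shares = {name: abs(c) / total for name, c in contributions.items()}
    return confidence, shares

# Hypothetical patient signals (1.0 = present) and model weights
patient = {"elevated_troponin": 1.0, "chest_pain": 1.0, "normal_ecg": 1.0}
weights = {"elevated_troponin": 2.1, "chest_pain": 0.8, "normal_ecg": -0.9}
confidence, shares = explain_diagnosis(patient, weights)
```

A clinician reviewing the output can see that, in this toy example, the elevated troponin reading carries most of the evidential weight, and can override the conclusion if that input is suspect.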
What’s in a process
A strong foundation for ethical technology and trust will be shaped by the principles of an organization’s leaders and realized in business processes.
- Respect stakeholder privacy. One of technology disruption’s most overarching effects has been to accelerate the collection, analysis, and dissemination of information. Not so long ago, the transactional details of our lives were kept in physical file cabinets, pulled out, and referenced for specific needs. Today, systems routinely collect these details and combine them with our purchase histories, posts on social media, online searches, and even the route we drive to work each day.10 If consumers have reason to believe their data is being used in ways they don’t approve of, reactions can include calls for boycotts, public inquiries, and even severe penalties under strict regulations, such as the European Union’s General Data Protection Regulation and California’s Consumer Privacy Act. Companies should create data privacy policies that build, rather than erode, public trust. A natural first step can be to ensure that data usage aligns with the company’s mission.11 For instance, JD Wetherspoon, a pub company operating in the United Kingdom and Ireland, recently deleted more than 656,000 customer email addresses because it viewed mass emailing as an intrusive approach to customer interaction that provided little value.12 This case highlights the importance of aligning data collection and usage not only with a company’s values but also, by extension, with the company’s trusting relationship with its customers.
- Be transparent. Companies can build trust with stakeholders by proactively and transparently demonstrating good behavior. “Transparency becomes vital and important,” says AI Global executive director Ashley Casovan.13 “Whether or not people are interested in seeing the resources and data behind it doesn’t really matter. Simply knowing that companies have transparent policies provides more confidence that they are doing the right thing.” Transparency extends beyond policies explaining data collection and usage practices. For instance, rather than masquerade as humans, intelligent agents or chatbots should identify themselves as such. Companies should disclose the use of automated decision systems that affect customers14 and should stay focused on the customer when problems occur, providing both speed and quality in response. The fallout from negative incidents need not include customer loss or reputation-damaging headlines.15
- Respect differing cultural norms. An organization’s overall approach to building trust is informed by interests, experiences, and professional standards as well as societal norms and government controls. It can be challenging to serve a global market in which expectations about government surveillance or law enforcement cooperation vary widely. For example, what is expected surveillance in some countries might seem outrageous elsewhere; cooperation with law enforcement is routine in many countries but perhaps unwise in places with rampant corruption or weak protections for political or religious rights. Some countries have very specific regulations on gaining explicit customer consent for data usage, while some municipalities are passing legislation, such as bans on facial recognition technology, that can conflict with other rulings. Effective governance of emerging technologies requires all relevant stakeholders—industry, consumers, businesses, governments, academia, and society—to work together. Businesses can play a key role in helping governments develop laws and standards that increase the reliability of emerging technologies16—frank, candid discourse about new technologies, for example, could lead to new rules and guidance concerning privacy, transparency, inclusivity, accessibility, inequality, and more.17
Empower the people
Since nearly everyone in an organization uses technology, ethical technology and trust are topics that touch everyone.
- Deploy the power of all. Companies can waste time and money creating something that excludes a customer group or provides a service with undesirable side effects. Perhaps even worse, they may build solutions that undermine customer trust. Often, design dilemmas begin with a homogeneous group of people designing products, processes, or services without thinking through how other groups of people might be affected. Leading companies are changing this dynamic by creating teams and roles that reflect their diverse customer base and bringing in multiple viewpoints from different industries, economic backgrounds, educational experiences, genders, and ethnic backgrounds.18 A 2013 Harvard survey revealed that organizations with leadership teams that have a combination of at least three inherent (ones you are born with) and three acquired (ones you gain through experience) diversity traits out-innovate and outperform the others; these organizations are 45 percent more likely to report growth in market share and 70 percent more likely to report capturing a new market.19
- Teach them to fish. Training technologists to recognize their own biases and to eliminate bias in the products they create is an important step toward creating a culture that emphasizes trust. But it is only one step. Building awareness of how technology affects stakeholder trust among those not directly involved in or responsible for technology, and creating associated decision-making frameworks, are additional steps organizations should consider. This is especially important in non–digital native organizations, where the ripple effects of day-to-day uses of technology may be less obvious to leaders and teams. Companies should consider what resources may be needed to help their employees recognize ethical dilemmas, evaluate alternatives, and make (and test) ethical technology decisions.20
- Give employees a reason to trust. Much of the anxiety over AI and other advanced technologies stems from the fear of the displacement of labor. From an ethical perspective, this presents business leaders with a challenge: balancing the best interests of the business, the employees, and the wider community and society. It’s a task made more complex by the fact that advanced technology systems are not self-sufficient. While AI can replace some jobs, for example, it creates others that often require specialized skills and training.21 Companies can build trust with employees by advising them on how technology may affect their jobs in the future. This could include retraining workers whose roles may evolve and who will likely work with automated systems.22
360 degrees of opportunity
Companies that don’t consider technology to be their core business may assume that these considerations are largely irrelevant. In truth, no matter the industry or geography, most organizations are increasingly reliant on advanced digital and physical technologies to run their day-to-day operations.
Much emphasis is placed on the challenges disruptive technologies bring and on the existential threats to an organization’s reputation when technology isn’t handled correctly—whether through misfeasance or malfeasance. Yet these same disruptive technologies can be used to increase transparency, harden security, boost data privacy, and ultimately bolster an organization’s position of trust.
For example, organizations can pivot personalization algorithms to provide relevant recommendations based on circumstance—for example, offer an umbrella on a rainy day rather than an umbrella after someone buys a raincoat. By focusing on relevance rather than personalization, AI recommendations are likely to seem more helpful than invasive.23
Deloitte surveys have found a positive correlation between organizations that strongly consider the ethics of Industry 4.0 technologies and company growth rates. For instance, in organizations that are witnessing low growth (up to 5 percent), only 27 percent of the respondents indicated that they are strongly considering the ethical ramifications of these technologies. By contrast, 55 percent of the respondents from companies growing at a rate of 10 percent or more are highly concerned about ethical considerations.24
After all, the pursuit of trust is not just a 360-degree challenge. It is also a 360-degree opportunity.
Lessons from the front lines
A healthy foundation for trust
Disruptions in the health care industry—including new care delivery models, consumer demand for digital experiences, declining reimbursements, and growing regulatory pressures—are driving many health care organizations to use technology to improve efficiency, cut costs, and improve patient care. And there could be an inadvertent benefit: Technology could help health care systems build trust with patients and providers.
Providence St. Joseph Health (PSJH) is leveraging technology to adhere to its mission of improving the health of underprivileged and underserved populations, says B.J. Moore, CIO of PSJH.25 Technology is helping the Catholic not-for-profit health system simplify complex experiences to enhance caregiver and patient interactions, modernize the operating environment and business processes, and innovate with cloud, data analytics, AI, and other technologies to help improve patient care.
In the process, PSJH is building trust. For example, the organization is collaborating with technology partners to standardize cloud platforms and productivity and collaboration tools across its 51 hospitals and 1,085 clinics, a move that will improve provider and patient engagement and enable data-driven clinical and operational decision-making. It also aims to develop the first blockchain-powered integrated provider-payer claims processing system. Such technological breakthroughs can increase trust—but careless deployment and negligence can quickly erode it. That’s why Moore has doubled down on establishing and maintaining a solid technology foundation for innovation and, by extension, trust. “Technology holds so much promise for helping patients at scale,” he says. “But it also has the potential to cause damage at scale.”
For example, data analytics, AI, and machine learning can help researchers and clinicians predict chronic disease risk and arrange early interventions, monitor patient symptoms and receive alerts if interventions are needed, estimate patient costs more accurately, reduce unnecessary care, and allocate personnel and resources more efficiently. When patients understand these benefits, they’re generally willing to share their personal and health information with care providers. But their trust could diminish—or vanish—if weak data security or governance protocols were to result in a data breach or unauthorized use of private health information. This could cause patients to conceal information from care professionals, lose confidence in diagnoses, or ignore treatment recommendations.
A number of industry regulations help ensure patient privacy and safety, and PSJH has another effective governance and oversight mechanism: a council of sponsors, consisting of clergy and laypeople, that holds moral accountability for PSJH’s actions in service of its mission. Sponsors help develop guidelines that ensure adherence to mission and values and advise the organization’s executive leadership and board of trustees on trust-related technology matters, such as the ethical use of data and the impact of technology on employees and caregivers.
“We’re continuously working to raise awareness of technology’s role in improving health,” Moore says. “Educating and communicating with patients, care professionals, regulatory bodies, and other key stakeholders can help prevent potential barriers to rapid experimentation and innovation and allow us—and our patients—to fully experience the benefits of technology.”
Do what’s right: CIBC’s strategic approach to building trust and engagement
CIBC is using technology to understand and anticipate individual client needs with the goal of delivering highly personalized experiences—an initiative they call Clientnomics™. Terry Hickey,26 CIBC’s chief analytics officer, recognized that AI-based algorithms could deliver the client insights required to drive Clientnomics but that to be successful, leaders needed to understand and share with employees how AI will complement and support the work they’re doing, versus replacing their jobs. The bank also needed to maintain clients’ trust by protecting their data and governing its use.
In early 2019, leaders from the bank’s analytics, risk, and corporate strategy teams collaborated to develop an organization-wide AI strategy, which CIBC’s senior executive committee and board of directors approved. At the heart of the strategy are guiding principles that address questions such as: When will we use the technology? When will we not use it? How do we ensure that we have our clients’ permission?
To reinforce employee trust, the strategic plan stated that a primary purpose of AI would be to augment employees’ capabilities to achieve company goals. Leaders agreed to focus on funding AI use cases that support employees in their roles and improve practices that aren’t currently optimized.
With the strategy in place, the next step was to build an AI governance process to ensure that new technology projects comply with the strategy and guiding principles. When a new project is proposed, stakeholders answer a series of questions that help them plan and document what they want to accomplish. These questions cover a broad range of ethical considerations, including project goals, possible inherent biases, and client permissions. Approved project documents are stored in a centralized library that regulators, internal auditors, and other reviewers can reference to explain the thought process behind the algorithm or model.
CIBC has also developed advanced analytic techniques to help govern its use of data—for instance, encoding client data in a way that it cannot be reverse-engineered to identify an individual. The analytics team also devised a way to assign a data veracity score—based on data quality and integrity, possible bias, ambiguity, timeliness, and relevance—to each piece of information that could be used by an algorithm. The algorithmic models are designed to recognize and treat the data veracity appropriately, supporting more reliable, trustworthy, and engaging interactions.
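A data veracity score of the kind described above can be sketched as a weighted combination of per-dimension ratings. The dimensions mirror those named in the text (quality, bias, ambiguity, timeliness, relevance), but the weights and threshold below are illustrative assumptions, not CIBC's actual methodology.

```python
# Hypothetical weights for each veracity dimension (sum to 1.0)
VERACITY_DIMENSIONS = {
    "quality": 0.3,      # completeness and accuracy checks
    "bias": 0.2,         # likelihood the value is systematically skewed
    "ambiguity": 0.15,   # how open to interpretation the value is
    "timeliness": 0.2,   # how recently the value was captured
    "relevance": 0.15,   # fit for the model's purpose
}

def veracity_score(ratings):
    """Combine per-dimension ratings (each 0-1, higher is better) into one score."""
    missing = set(VERACITY_DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(VERACITY_DIMENSIONS[d] * ratings[d] for d in VERACITY_DIMENSIONS)

score = veracity_score({"quality": 0.9, "bias": 0.8, "ambiguity": 0.7,
                        "timeliness": 1.0, "relevance": 0.6})
# A downstream model could then down-weight or exclude low-scoring inputs,
# e.g. by requiring score >= 0.5 before a field influences a decision.
```

The design choice worth noting is that the score travels with each piece of information, so every algorithm consuming the data can apply its own tolerance rather than relying on a single upstream filter.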
As the analytics team launches Clientnomics, members are focused on developing customized AI-supported client experiences rather than large-scale technology projects. So far, they have accumulated 147 use cases, completing 40 in the first year.
For example, when a client calls CIBC’s contact center, a predictive model dynamically configures the interactive voice response menu based on the client’s recent transactions and offers the most relevant information at the top of the menu. The bank aims to cement client relationships over time with a continuous string of personalized interactions.
“In my previous role,” Hickey says, “I spent a lot of time with organizations around the world. Everyone talked about the benefits and future potential of AI, and some completed proofs-of-concept, but few were able to implement them, especially in banking and finance. By proactively addressing how we will—and will not—use technology, CIBC has embraced the positive benefits it can deliver to employees and clients. All of this in less than a year.”
Trust encoded in Abbott’s DNA
In the health care industry, trust is a primary driver of patient behavior: Trusted organizations have an edge in influencing behaviors that can create more positive health outcomes. For 130-year-old global health care company Abbott, trust is top of mind as it evolves and expands its portfolio of diagnostic products, medical devices, nutritional products, and branded generic medicines, says CMO Melissa Brotz.27
With technology-driven products such as sensor-based glucose monitoring systems, smartphone-connected insertable cardiac monitors, and cloud-connected implantable defibrillators and pacemakers, Abbott takes a multifaceted approach to trust, adds CIO Mark Murphy.28 Across the enterprise and its connected technologies, this includes comprehensive data protection policies, employee training programs, an external ecosystem of trust-based partners, and other components.
For example, Abbott is exploring multiple data-enabled opportunities to improve health care, such as a machine learning solution that combines performance data from the company’s diagnostics platforms with global clinical and patient demographic data to help health care providers diagnose heart attacks.29 To safeguard patient data and privacy—a core facet of trust—Abbott has enacted a number of enterprise-wide policies, procedures, and annual employee training and certification programs related to data handling and protection and compliance with national and global regulatory mandates. Leaders have also made significant investments in cybersecurity capabilities and controls embedded into product designs, which is increasingly critical for a company such as Abbott, with products and services that are heavily connected and integrated—often with other products, systems, and apps.
In addition, ensuring patient trust is a responsibility that falls to each of Abbott’s 103,000 employees, from the board of directors and C-suite leadership to researchers, product designers, and engineers. Company leadership, for instance, is involved in data and product security oversight groups and board subcommittees, while employees participate in rigorous education programs on the implications of data privacy, security, and transparency. “Abbott is focused on helping people live better, healthier lives,” Murphy notes. “Often, technology is the enabler to help us do that, but it always starts with the patient. We know that when we build technology, we are doing so on behalf of the person who wears it, accesses it, or lives with it inside their body. And that means we must protect it—securely and responsibly.”
Abbott also relies on a strong external ecosystem to maintain patient trust. Independent third parties and research groups test Abbott’s products and services and assess their vulnerabilities on an ongoing basis. For example, the company is part of the #WeHeartHackers initiative, a collaboration between the medical device and security research communities that seeks to improve medical device security. At a recent event, Abbott teamed with university researchers to build a mock immersive hospital that enabled researchers to practice cybersecurity defense techniques.30
Rounding out Abbott’s trust ecosystem are patients and care providers themselves. To learn what concepts such as trust, security, and privacy mean to the different users of its products and services, the company regularly holds focus groups with them and produces educational material to raise awareness of these issues.
Ultimately, Brotz says, data-enabled technologies that help people live better lives are an extension of the lifesaving products and services that patients and their care providers have trusted for 130 years. “Patients place the highest levels of trust in us, and we take it very seriously,” she says. “It’s part of our DNA. Our greatest responsibility is to keep them and their data safe and secure.”
Rebuilding security from the ground up to maintain customer trust
Because a company’s approach to technology directly affects stakeholder trust in its brand, businesses that are leveraging advanced technologies can benefit from considering the technologies’ impact on ecosystem partners, employees, customers, and other key stakeholders. Strong security controls and practices are foundational elements for building and maintaining stakeholder trust. Recognizing the impact of security breaches on customer trust, Google went beyond the expected table stakes by completely redesigning its security model to protect enterprise systems and data.
A decade ago, as Google moved internal applications and resources to the cloud, its security perimeter was constantly expanding and changing, complicating the defense of its network perimeter. At the same time, companies were seeing more sophisticated attacks by nation-state-sponsored hackers, testing the limits of the perimeter-based model of security. Hence, Google decided to completely overhaul its security approach and implement a new security model that turned the existing industry standard on its head, says Sampath Srinivas, Google product management director for information security.31
Google security experts could no longer assume that walling off the network would provide the security required to maintain system integrity and customer trust. They sought to reinvent the company’s existing security architecture, since the traditional castle-and-moat model—based on a secure network perimeter with VPN-based employee access to internal applications—was no longer adequate. The goal: to ensure that employees could use any corporate application from any location on any device as easily as if they were using Gmail and as safely as if they were in a Google office.
Google embraced the zero-trust concept, an innovative security model that eliminates network-based trust, Srinivas says, instead applying access controls to applications based on user identity and device status, regardless of network location.
Google’s zero-trust security strategy treats every network request as if it came from the public internet. It applies context-aware access policies based on signals such as user identity, device attributes, session information, IP address, and the context of the access request itself, collected in real time by a device inventory service. A globally distributed reverse proxy protects the target server, encrypts traffic to protect data in transit, and acts as a sophisticated rules engine that determines access rights based on the user’s and device’s context, such as whether the device is fully patched. Every access request is subject to authentication, authorization, and encryption. To protect against phishing, the company—working with the industry through the FIDO Alliance standards organization—developed and deployed a new form of cryptographic hardware two-factor authentication called Security Keys.32
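The core of such a rules engine can be sketched as a default-deny access check that evaluates identity and device posture on every request, ignoring network location entirely. The application names, group names, and policy fields below are illustrative assumptions, not Google's actual policy schema.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_group: str        # asserted by the identity provider
    device_managed: bool   # reported by the device inventory service
    device_patched: bool   # current patch status of the device
    app: str               # internal application being requested

# Hypothetical per-application policy: required group and device posture
POLICY = {
    "payroll": {"group": "finance", "managed": True, "patched": True},
    "wiki":    {"group": "employees", "managed": False, "patched": True},
}

def authorize(req: AccessRequest) -> bool:
    """Grant access only if identity and device posture satisfy the app's policy."""
    rule = POLICY.get(req.app)
    if rule is None:
        return False  # default deny: unknown apps are never reachable
    if req.user_group != rule["group"]:
        return False  # identity check, independent of network location
    if rule["managed"] and not req.device_managed:
        return False  # sensitive apps require a managed device
    if rule["patched"] and not req.device_patched:
        return False  # stale devices are refused until updated
    return True
```

In the described architecture this check would run inside the reverse proxy on every request, so an employee on an untrusted network and one inside a corporate office are evaluated identically.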
Today, Google’s user- and device-centric security workflow allows authorized users to securely work from an untrusted network without the use of a VPN. The user experiences internal applications as if they were directly on the internet. Employees, contractors, and other users can have a seamless user-access experience from any location by simply typing in a web address—a process that dramatically reduces support burden. “To deliver on our goal of maintaining customer privacy and trust, we had to look beyond the status quo solutions, innovate, and take risks,” Srinivas says. “When we broke with tradition and changed the way we thought about our security infrastructure, it allowed us to develop a more effective way to protect data and systems.”