Joanna Koprowicz
Today, I want to dive into a topic that resonates deeply with me: “Navigating the ethics of artificial intelligence (AI)”.
How do we create AI solutions that don’t just disrupt industries, but actually do good in the world? Let me guide you through this journey with the help of some remarkable case studies, showcasing both the shining moments and the times when companies learned the hard way. Together, we’ll explore the intricate dance between technological innovation and ethical responsibility—how to build AI that makes the world better, not just more advanced.
The rapid evolution of AI has transformed almost every aspect of our lives—from the way we work to how we communicate, access information, and even make critical decisions. With such immense power comes an equally immense responsibility. We are standing at a crossroads, where we must decide how to harness this technology to serve humanity, rather than exploit it. Navigating these challenges requires understanding both the opportunities and the ethical pitfalls that come with AI. We must ask ourselves not just what AI can do, but also what it should do, and how we can ensure it serves the greater good.
Let’s start with Apple
Apple has consistently held user privacy as one of its core values. It’s more than just a marketing tagline—it’s embedded in the very way they build their AI-driven technology. They’ve got features like on-device data processing and end-to-end encryption for services like iMessage and FaceTime.
By processing much of your data directly on your device, Apple minimizes what’s collected in the first place. This isn’t just about compliance or ticking some regulatory checkbox—it’s a real commitment to user rights.
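To make data minimization concrete, here’s a tiny sketch of local differential privacy using randomized response: each device adds noise before anything leaves it, so a server can learn population statistics without trusting any individual report. This is purely illustrative and not Apple’s actual implementation; every function name and parameter here is my own assumption.

```python
import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise report
    a fair coin flip. No single report reveals the user's true value."""
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Invert the noise to recover the population rate of True answers:
    observed = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 10,000 devices, 30% of which have some sensitive attribute.
reports = [randomized_response(random.random() < 0.3) for _ in range(10_000)]
print(f"Estimated population rate: {estimate_true_rate(reports):.3f}")  # ~0.30
```

The point of the sketch: the raw signal never has to leave the device in identifiable form, which is the spirit of privacy by default.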
Apple’s approach to privacy is about embedding ethical thinking into the DNA of their technology. They ask the hard questions at every stage of development: How can we collect less data? How can we protect our users’ privacy by default? And this mindset is paying off. Trust has become their most valuable currency. In a world where privacy breaches make headlines nearly every day, Apple’s stance sets it apart. They’ve raised the bar not only for themselves but for the entire industry. Think of it this way: Apple’s approach to privacy isn’t about being anti-advertising. It’s about making privacy a default, and that decision has created a ripple effect, forcing competitors to rethink their own practices.
And what’s the outcome of this commitment? Trust. Trust isn’t just a nice-to-have—it’s foundational to long-term success. Apple’s commitment to privacy helps build deep relationships with its users, creating loyalty that no marketing budget can buy. This kind of ethical foresight is essential when developing AI solutions. If we fail to put user rights and ethical considerations first, we risk losing the very people our innovations are meant to serve. Trust is fragile, and once broken, it is incredibly difficult to rebuild. Apple’s example teaches us that the best way to foster trust is to prioritize ethics from the very beginning.
Let’s contrast that with Google’s experience with Project Maven
In 2018, Google faced an ethical dilemma when it partnered with the U.S. Department of Defense on Project Maven, an initiative that used AI to analyze drone surveillance footage and, by extension, improve the accuracy of drone strikes.
The intention might have been to help—to make these operations more precise and therefore, theoretically, reduce collateral damage. But not everything that can be done with AI should be done. Google’s own employees stood up and said, “Wait a minute, do we want our work used in this way?” They protested against the ethical implications of using AI for military purposes.
Google listened, and after internal protests and public scrutiny, they decided not to renew the contract. They even released a set of AI ethics principles, which guide their future projects. That decision wasn’t easy, and it wasn’t perfect. But it’s a powerful example of a company learning in real time—adjusting when they realize they may be on the wrong side of their ethical boundaries. Sometimes ethics requires that hard pivot.
This scenario with Google highlights something crucial about AI ethics—listening to the voices of your stakeholders. It’s not just about what leadership wants; it’s about what the people who build the technology believe is right.
Google’s response to Project Maven shows the complexity of ethical decision-making in AI. There will always be gray areas, and the right choice might not always be clear-cut. But being open to criticism, listening to concerns, and being willing to change direction are key components of ethical innovation. It also demonstrates the importance of having clear ethical guidelines and principles in place to help navigate these challenging decisions.
Volkswagen—now there's a cautionary tale
In 2015, Volkswagen admitted to deliberately installing software in 11 million diesel cars that allowed them to cheat emissions tests. This wasn’t an oversight—this was a deliberate deception. The ethical breach here wasn’t subtle—it led to environmental harm and a significant loss of consumer trust. The fallout was enormous: fines, legal action, and a shattered reputation.
Yet, from that place of failure, Volkswagen chose to pivot hard toward electric vehicle technology, including AI-powered systems. They invested heavily in sustainability, trying to rebuild what they had lost. It’s a stark reminder that while AI and technological innovation can push us forward, they can also pull us back if not grounded in honesty.
The scandal harmed their brand deeply, but their response afterward—the shift to electric vehicles—illustrates that even the most tarnished reputation can attempt redemption.
Volkswagen’s story also teaches us about accountability and how important it is to have a culture of ethics that permeates every layer of an organization. AI is only as ethical as the people behind it, and if those people are motivated by profit over principles, then we have a serious problem. AI can either become a tool that serves society or one that deceives it—Volkswagen’s case reminds us of what happens when we lose sight of our responsibilities. It’s about creating a culture where ethics are embedded at every level, from top executives to the engineers building the technology.
Facebook and the Cambridge Analytica scandal
Returning to privacy, let’s look at Facebook and the Cambridge Analytica scandal. In 2018, we learned that Cambridge Analytica had harvested data from millions of Facebook users without their consent. That data was then used to influence political campaigns. This wasn’t just about a breach—it was about manipulation. It was about eroding the trust that millions of users had placed in the platform. This unethical use of personal data led to regulatory investigations across the globe, hefty fines, and a wave of public backlash. Facebook responded by implementing stricter data policies and increasing transparency. But here’s the thing—reactive measures are never as impactful as proactive ones.
Facebook’s journey reminds us that the ethical use of AI and data cannot be an afterthought; it must be baked into the DNA of innovation from the start. Ethical foresight is critical in ensuring that data is handled in a way that respects user autonomy and privacy. When we think about the role of AI in our lives, it’s critical to remember that we are dealing with human data—data that reflects our preferences, behaviors, and even our vulnerabilities. With AI, there is a fine line between creating value and creating harm. Facebook’s reactive stance illustrates the dangers of building first and thinking later. In the world of AI, ethical foresight is the only way forward.
Patagonia - a company that's setting a high standard
Patagonia is a company that deeply values sustainability and ethical labor practices. They don’t just make outdoor gear—they make a statement with every product. They use recycled materials, ensure fair labor conditions, and donate a portion of their profits to environmental causes.
Their commitment to an ethical supply chain is authentic, and it’s cultivated a fiercely loyal customer base that believes in those values. For Patagonia, ethics is not a side note. It’s central to who they are and what they do. It’s a great example of how ethical commitments can align with a company’s core mission, proving that doing good and doing business can coexist. They’re setting a benchmark for what corporate social responsibility looks like, beyond the marketing buzzwords, and their ethical approach has strengthened their brand and earned deep trust and loyalty from their customers.

Patagonia’s commitment to sustainability is particularly relevant when we consider the environmental impact of AI. Training large AI models requires significant energy, and companies must account for their carbon footprints. Patagonia shows us that a business can be both profitable and environmentally conscious, innovating responsibly without compromising on growth or success. AI companies can learn from this example by integrating environmental considerations into their development processes, ensuring that their technology is both effective and sustainable.
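To put a rough number on that energy point, here’s a back-of-the-envelope estimate of a hypothetical training run’s footprint. Every figure below is an assumed, illustrative value, not a measurement of any real model or data center.

```python
# Back-of-the-envelope carbon estimate for a hypothetical training run.
# All figures are illustrative assumptions, not real measurements.
gpus = 512                   # number of accelerators (assumed)
hours = 24 * 14              # two weeks of training (assumed)
watts_per_gpu = 400          # average draw per accelerator (assumed)
pue = 1.2                    # data-center power usage effectiveness (assumed)
kg_co2_per_kwh = 0.4         # grid carbon intensity; varies widely by region

energy_kwh = gpus * hours * watts_per_gpu / 1000 * pue
co2_tonnes = energy_kwh * kg_co2_per_kwh / 1000

print(f"Energy:    {energy_kwh:,.0f} kWh")    # ~82,575 kWh
print(f"Emissions: {co2_tonnes:,.1f} t CO2")  # ~33 t CO2
```

Even this crude arithmetic makes the lesson visible: choices about where you train and how efficiently you schedule it are environmental decisions, not just engineering ones.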
Microsoft and its approach to AI ethics
Let’s shift gears and look at Microsoft and its approach to AI ethics.
Microsoft has proactively developed principles to ensure that AI is developed and deployed responsibly. They have guidelines focused on fairness, reliability, privacy, inclusiveness, transparency, and accountability.
This isn’t just theory—they’ve put these principles into practice through the Aether Committee, which reviews AI projects to ensure they meet ethical standards. Why is this important? Because the conversation about AI isn’t just about what it can do—it’s about what it should do. Ethical innovation in AI is about ensuring that technology works for everyone, that it’s inclusive and does not reinforce biases or create harm. Microsoft’s proactive approach here sets a strong example for how to manage emerging technologies: they understand that without human oversight and ethical guidelines, AI can easily perpetuate systemic biases and inequities.

The Aether Committee is not just about oversight; it’s about a cultural commitment to ethical standards. Microsoft’s approach underscores the importance of embedding ethics at every stage—from ideation to deployment. It’s not just about compliance; it’s about making ethical consideration an integral part of the creative process. And ethical AI development isn’t just a matter of good governance—it’s a competitive advantage. By prioritizing fairness, transparency, and inclusivity, Microsoft is not only safeguarding its users but also setting itself apart in a crowded market.
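One lightweight way teams turn transparency principles into practice is documentation that travels with the model, in the spirit of model cards and the transparency notes some vendors publish. The sketch below is a generic illustration under my own assumptions; the schema, fields, and model name are hypothetical, not Microsoft’s or anyone else’s format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, machine-readable record of what a model is for and where
    it falls short. Fields are illustrative, not any vendor's schema."""
    name: str
    intended_use: str
    out_of_scope_uses: list[str]
    known_limitations: list[str]
    fairness_evaluations: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    name="loan-screening-v3",  # hypothetical model
    intended_use="Rank applications for human review; never auto-deny.",
    out_of_scope_uses=["fully automated denial", "employment screening"],
    known_limitations=["trained on 2015-2020 data; may lag current trends"],
    fairness_evaluations={"approval_rate_gap_across_groups": 0.03},
)
print(card.name, "| out of scope:", ", ".join(card.out_of_scope_uses))
```

Writing the limitations down forces the hard questions to be asked before deployment, which is exactly where ethical oversight belongs.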
Johnson & Johnson’s response to the Tylenol crisis
Now, consider Johnson & Johnson’s response to the Tylenol crisis in 1982.
When several people died from cyanide-laced Tylenol capsules, Johnson & Johnson quickly recalled 31 million bottles—a massive financial loss but the ethically right move.
They prioritized consumer safety above all else. Their swift and transparent response not only saved lives but also rebuilt trust. Johnson & Johnson set new standards for product safety and crisis management, including the introduction of tamper-evident packaging. Their story is a profound example of how taking responsibility, even at great cost, can ultimately strengthen a brand’s relationship with its consumers.

In the world of AI, transparency and accountability are key. Mistakes will happen—no system is infallible. But how a company responds to those mistakes will determine whether they gain or lose public trust. Johnson & Johnson’s example teaches us that owning up to errors and taking immediate corrective actions are essential elements of ethical practice. In AI, this means being open about the limitations of our technology, being willing to admit when things go wrong, and acting swiftly to correct them. Transparency is the cornerstone of trust, and without it, any technological innovation is likely to face skepticism and resistance.
IBM and facial recognition technology
IBM, in 2020, made a bold move regarding facial recognition technology. They announced they would no longer offer general-purpose facial recognition software.
Their concern was that this technology could be misused for mass surveillance and racial profiling. By stepping away, IBM initiated industry-wide discussions about the ethical use of AI-driven facial recognition. Sometimes, choosing not to innovate in a certain direction is the most ethical decision a company can make.
Facial recognition is a powerful tool, and with that power comes enormous ethical responsibility. IBM’s decision to withdraw from the market highlights an important point: not every innovation is worth pursuing. AI has immense potential for good, but it can also be weaponized in ways that harm individuals and communities. IBM’s choice to prioritize ethics over market share shows that ethical restraint is just as important as technical advancement. This move also reminds us that sometimes the greatest innovation is not creating new technologies but rather deciding where and how not to use them in ways that could harm society.
Nestlé’s journey with responsible marketing
In the 1970s, Nestlé faced criticism for aggressively marketing infant formula in developing countries, which led to health issues among infants.
In response to global backlash, they committed to following the WHO’s International Code of Marketing of Breast-milk Substitutes. It was a hard lesson learned, but it led to more ethical marketing practices. Nestlé’s story reminds us that ethics must include considering the broader impact of our actions on vulnerable communities.
For AI, this means considering not just the direct users of a technology but also the broader ecosystem it affects. The unintended consequences of AI systems—whether it’s biases in machine learning algorithms or the societal impacts of automation—must be considered at every step. Nestlé’s pivot toward more ethical marketing practices teaches us that ethical responsibility is not just about fixing problems; it’s about preventing them from happening in the first place. AI developers must consider the ripple effects of their technologies on society, ensuring that innovation does not come at the expense of the most vulnerable.
Airbnb - challenges with discrimination
Airbnb, too, faced challenges with discrimination on its platform.
Reports emerged of hosts discriminating against guests based on race and other factors. Airbnb recognized this issue and responded with policies to combat discrimination, including anonymizing booking requests and offering diversity training to hosts. They’re not perfect, but they’re trying to create a more inclusive community—showing that an ethical response is about constantly improving and listening.
The case of Airbnb speaks to the social impact of AI-driven platforms. Algorithms are not inherently biased, but they can learn and perpetuate the biases present in the data they are trained on. Airbnb’s efforts to address discrimination demonstrate the importance of monitoring and refining AI systems. It’s not enough to build a platform and let it run; ethical AI requires continuous evaluation and improvement. The work is never done, and that’s the point—ethics is a journey, not a destination. It’s about making sure that AI evolves in ways that reflect our highest values rather than our worst impulses.
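Continuous evaluation can start with something as simple as a recurring audit of outcomes across groups. Here is a minimal sketch of one common warning signal, the demographic parity gap; the data, field names, and threshold are hypothetical and are not Airbnb’s actual pipeline.

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list[dict]) -> float:
    """Largest gap in approval rates across groups. Each decision looks
    like {"group": "A", "approved": True}. A big gap is a signal to
    investigate, not proof of bias on its own."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit sample: booking decisions for two synthetic groups.
sample = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20
  + [{"group": "B", "approved": True}] * 60 + [{"group": "B", "approved": False}] * 40
)
gap = demographic_parity_gap(sample)
print(f"Approval-rate gap: {gap:.2f}")  # 0.20
if gap > 0.05:  # threshold is a hypothetical policy choice
    print("Flag for human review of the booking flow.")
```

The metric itself is trivial; the ethical work is in running it continuously and acting on what it finds.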
Tesla and the ethics of Autopilot
Tesla is another company that’s faced its share of ethical scrutiny—particularly around their Autopilot feature. Autonomous driving technology is exciting, but with great innovation comes significant responsibility. Tesla has faced criticism, especially after accidents involving their Autopilot system. In response, they’ve worked to increase transparency about what Autopilot can and cannot do, emphasizing its limitations to consumers and rolling out safety updates. This highlights a core element of ethical innovation: responsibility doesn’t end at product release. It’s an ongoing commitment to ensure safety, transparency, and consumer understanding.
Tesla’s experience shows us that transparency is key—not just about what the technology can do, but also what it can’t. When we create AI systems that interact with human lives, there is no room for ambiguity. The stakes are too high, and ethical responsibility means being clear, honest, and proactive. It’s about continuously assessing the impact of these technologies and updating them to ensure they align with safety standards and ethical expectations. Tesla’s journey reminds us that innovation and ethics must move forward together—without one, the other is incomplete.
Google’s AI Ethics Board
Finally, let’s revisit Google’s struggle to institutionalize ethical oversight with their AI Ethics Board.
In 2019, they set up an external AI ethics board, the Advanced Technology External Advisory Council, to oversee ethical issues in AI, but dissolved it barely a week later amid internal challenges and public backlash over its composition. This outcome underscores just how complex institutionalizing ethics can be. It’s not enough to have principles; there must be a clear, transparent, and inclusive governance structure.
Google’s experience here is a reminder that ethics isn’t a one-off decision—it’s a continuous process, one that requires diverse perspectives and a commitment to real oversight.
Institutionalizing ethics within a company is challenging, but it’s necessary. AI ethics cannot be an afterthought—it must be ingrained in the company’s culture. Google’s attempt, though ultimately unsuccessful, shows us that we must continue trying. We need diverse voices, robust frameworks, and a willingness to evolve. Ethical governance is hard, but it’s the backbone of responsible AI development. We must be willing to engage in difficult conversations, face scrutiny, and make adjustments when necessary. Building an ethical AI governance framework is not just about protecting users; it’s about building systems that reflect our shared values and contribute to a better society.
So, what do we learn from these stories?
There are three categories of ethical actions we can take:
Proactive Ethics Implementation: Companies like Apple and Patagonia show us what it means to integrate ethics into their core values, influencing every aspect of their AI development and business operations. They take a proactive stance, ensuring that ethical considerations are built in from the start, rather than addressed after the fact. Proactive ethics means asking the hard questions before a crisis arises and making choices that prioritize people over profit.
Responsive Ethical Actions: Organizations like Johnson & Johnson and Microsoft took decisive, ethically driven actions in response to dilemmas. These actions set industry standards and highlighted the importance of responding with integrity, even under pressure. Their responses are lessons in accountability and transparency. Ethical responses are about more than damage control—they are about taking meaningful actions that address the root of the issue and prevent recurrence.
Learning from Mistakes: Companies such as Volkswagen and Facebook illustrate the consequences of ethical oversights and the importance of rebuilding trust through corrective measures and genuine change. Mistakes are inevitable, but what matters is how we learn from them and ensure they do not happen again. Ethical innovation is about embracing failure as a learning opportunity and committing to doing better in the future.
AI is powerful—it changes lives, creates opportunities, and drives progress. But for that innovation to truly be a force for good, it must be anchored in ethics. It’s not about choosing between innovation and doing the right thing—it’s about ensuring that the two walk hand in hand. Ethical innovation requires courage—the courage to listen, the courage to admit mistakes, and the courage to put people before profit.
We have to remember that AI is not just a tool—it’s a reflection of the people who create it, the values they hold, and the future they envision. Our ethical decisions today will shape the AI of tomorrow. AI has the power to amplify our best qualities, but only if we are deliberate about guiding its development. We must approach AI with a sense of responsibility and a commitment to humanity’s collective well-being. So, as we leave today, I challenge all of you—whether you’re innovators, entrepreneurs, or leaders—to ask yourselves: “How can we ensure our AI innovations do more than just work? How can they do good?”
The answer lies in being deliberate, being courageous, and above all, being human-centred.