Business Case for AI Ethics
What’s the missing ingredient if we want to sustain AI Ethics efforts in the long run?
I have a hard time recalling any instance where the “do the right thing” narrative made significant headway in convincing the people in charge of capital allocation. That’s true in general, and it’s true of AI Ethics in particular.
Any push towards doing the ethical and responsible thing that fails to acknowledge the systemic bias towards maximizing profits is simply wishful thinking. I’m not trying to be contrarian, and I’m definitely not saying that people who spend their lives advocating for AI Ethics are barking up the wrong tree. But let’s be honest: big corporations pretend to care about ethics and responsibility when it helps their PR, and as soon as things get tough, the AI Ethics teams are the first ones to get axed.
So, what’s the missing ingredient if we want to sustain AI Ethics efforts in the long run?
The Ethical-Economic Paradox
Often, arguments about AI ethics start with examples like biased loan application processing systems. They go on to say, “AI might deny loans to people from certain backgrounds due to biased data, and that’s bad.” Yes, it is! But what these arguments fail to mention is that the same AI increases overall processing efficiency, saving the financial institution lots of money, so it has zero incentive to take this kind of outcry seriously. For the CFO of this business, the “people from certain backgrounds” you are advocating for are simply statistical errors in an otherwise well-performing, now-improved system.
You see the issue?
We create systems that are statistically efficient but cause individual harm, sometimes knowingly, and sometimes without even knowing why, thanks to black-box algorithms. The obvious answer is yes, we have a responsibility to make these systems ethically right. But how do we do that in a way that acknowledges the realities of the world?
You might expect me to say things like "ethics keeps you out of trouble," "it's good for your brand," or "values matter." I find these statements often meaningless because we've been saying them for a long time without seeing substantial outcomes.
"Short-term profit is always at odds with the well-being of the user, society, and environment."
It even has a name: the "ethical-economic paradox."
Let’s look at this in action:
Startups chasing growth at all costs often deprioritize “soft values” like ethics.
Private equity firms acquire companies and cut anything that doesn’t immediately impact the bottom line.
Social media platforms are built on maximizing attention and outrage - not on protecting mental health.
Big Pharma has minimal incentive to heal when treating symptoms is more profitable.
Food industries thrive on sugar, chemicals, and addiction - because they drive sales.
So when people say, “Let’s build ethical AI,” I ask: in this environment - how, exactly, do you expect that to happen?
This fundamental tension explains why many well-intentioned ethics initiatives collapse when business conditions tighten.
In nearly all the examples above, the push for short-term gains leads to products or practices that cause long-term harm. Regulations attempt to address this, but regulators are often a decade behind the technology, and by the time they wrap their heads around it, the damage is done. Now imagine where we will be in 5 years at the current rate of development in foundation models and the AI agents built around them!
Here's where I think things get interesting: I argue that the only way out of this tension is finding the sweet spot - an “opportunity zone” where ethical behavior, profit-making, and structural realities overlap. I believe the most impactful change will come from builders, founders, and entrepreneurs who build in this sweet spot. These businesses won’t necessarily be venture-capital friendly, but they can make enough money for the founder to live comfortably while also sleeping peacefully at night.
Learning from Environmental Innovation
We can learn a lot from how the ESG and clean-tech sectors evolved over the decades.
Many startups tried to build products that reduce waste or promote clean energy. Their intentions were good, but their methods were flawed.
These companies often failed because they appealed to morality. They asked investors to fund them out of principle. They asked consumers to buy their products out of conscience. In the real world, that usually doesn’t work.
A large number of those startups never got off the ground. Many faded away without scale because they never solved a real business problem. They assumed that if people cared enough, things would change. That didn’t happen.
But some did succeed. Let me share two examples that explain how they got it right.
Smart Recycling Bins (MyMatter)
MyMatter created a smart bin that uses computer vision to sort waste automatically. If someone throws a recyclable item into the trash, the bin detects the mistake and moves the item into the correct compartment.
This solves a practical issue. People often don’t know whether something is recyclable, and they don’t want to think about it.
The product removes the decision-making burden from the user.
It is sold to cities and hotels where a lot of garbage mixing happens.
It uses AI to address a problem that would otherwise require behavior change.
This product works because it connects environmental goals with business needs. It reduces waste, saves time, and fits naturally into how people behave.
Kitchen Waste Monitoring (Winnow)
Winnow places a camera above and a scale below trash bins in hotel kitchens. The system identifies what food is thrown away and weighs it. Each day, the hotel receives a report that shows the exact cost of the waste.
For example, “You threw away $140 worth of cucumbers today.”
It also gives practical suggestions. Reuse tomato scraps for sauce. Reduce future orders for the items you often waste.
Staff don’t have to change their process. The system works passively.
Executives gain visibility into financial losses, and guess what, they’re motivated to reduce that.
Behavior shifts naturally through awareness and cost savings.
This is another case of solving a structural issue while aligning with both sustainability and profitability.
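To make the mechanics concrete, here is a minimal sketch of the kind of daily aggregation such a system performs once items have been identified and weighed. The `daily_waste_report` function, the per-kilogram prices, and the event data are all made up for illustration - this is not Winnow's actual implementation.

```python
from collections import defaultdict

# Hypothetical per-kg ingredient costs; a real system would pull these
# from the kitchen's own purchasing data.
COST_PER_KG = {"cucumber": 2.80, "tomato": 3.10, "salmon": 24.00}

def daily_waste_report(events):
    """Aggregate (item, weight_kg) waste events into a cost-ranked report."""
    totals = defaultdict(float)
    for item, weight_kg in events:
        totals[item] += weight_kg * COST_PER_KG.get(item, 0.0)
    # Sort so the most expensive waste streams appear first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic events from one day: (item identified by the camera, weight on the scale).
events = [("cucumber", 30.0), ("tomato", 4.5), ("cucumber", 20.0)]
for item, cost in daily_waste_report(events):
    print(f"You threw away ${cost:.2f} worth of {item}s today.")
```

The point of the sketch is the output format: a ranked dollar figure per ingredient is something an executive acts on, whereas raw kilograms of mixed waste are not.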
These examples succeed because they align doing the right thing (reducing waste) with making money (saving costs) and working around structural biases (making it easy for staff).
This is what ethical product design should do. It should eliminate resistance. It should reduce the friction of doing the right thing.
The Opportunity Zone: Where Ethics Meets Business
The key idea I want to present is identifying what I call the "opportunity zone" – the intersection of three critical elements:
Doing the right thing (ethical imperatives)
Making money (business viability)
Working around structural biases (practical implementation)
This framework shifts our thinking from abstract moral principles to concrete business implications. Instead of just saying "build ethical AI because it's right," we can reframe ethical considerations as business imperatives.
If you genuinely care about ethical AI, you must figure out how to operate in this intersection. The solution to the paradox lies in balancing these competing forces. The environmental examples did precisely this: they linked environmental goals directly to financial outcomes (cost savings) and addressed structural issues (making recycling effortless, automated waste tracking).
Crucially, you cannot go far doing the right thing and solving structural problems without making money. Ethical initiatives require funding. Whether inside a corporation or as a startup, if your ethical effort isn't tied to the bottom line, it risks being cut. No investor funds a project solely because it's ethical; they invest because they expect a return.
While massive companies like Meta or Google face the paradox at a different scale, for most of us building products, aligning ethics with these business realities is the key to creating sustainable positive impact.
The ESG playbook evolved from “reduce waste” to “cut costs and access new asset classes.” AI needs the same shift.
The AI Ethics Playbook
Perhaps, this can be the beginning of a practical checklist for building AI products that succeed ethically and commercially:
Alignment with user preferences isn't just ethical – it drives adoption.
Explainability isn't just transparent – it enables sales to regulated industries and helps users stick around.
Guardrails aren't just responsible – they're necessary for business customers who require predictable systems.
Unbiased data isn't just fair – it expands your addressable market.
1. Trust is Paramount
Trust isn't just a moral virtue – it's a business necessity. When ChatGPT first launched, initial hallucinations created excitement but quickly eroded user trust for some applications, as people found it unreliable for serious use. Only after addressing these credibility issues did sustained usage follow in many areas. While established companies might get second chances, most startups won't have that luxury. You have to get it right the first time.
2. Explainability Drives Adoption
If users don't understand how your system makes decisions, they won't stick around – especially in regulated industries. No financial institution or healthcare provider will adopt a black-box system that can't explain its recommendations when challenged. Explainability isn't just about transparency; it's about market access. You often can't sell the product otherwise.
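As an illustration of what "explainable by construction" can look like, here is a toy rule-based scorer that returns its reasons alongside the decision. The thresholds, feature names, and the `score_loan` function are hypothetical, not drawn from any real underwriting system.

```python
def score_loan(applicant):
    """Toy rule-based scorer that returns a decision *and* the reasons.

    Illustrative only: thresholds and features are made up, not taken
    from any real credit model.
    """
    reasons = []
    if applicant["debt_to_income"] > 0.40:
        reasons.append("debt-to-income ratio above 40%")
    if applicant["missed_payments_12m"] > 2:
        reasons.append("more than 2 missed payments in the last 12 months")
    decision = "deny" if reasons else "approve"
    return decision, reasons

decision, reasons = score_loan({"debt_to_income": 0.48, "missed_payments_12m": 1})
print(decision, reasons)  # the denial arrives with a human-readable reason attached
```

The design choice worth noting: the reasons are produced by the same logic that makes the decision, so the explanation can never drift out of sync with the model's behavior - which is exactly what a regulated buyer will ask you to demonstrate.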
3. Data Quality Determines Market Reach
Biased training data doesn't just create ethical problems – it limits your addressable market. If your product works well for urban users but poorly for rural ones due to skewed data, you're unnecessarily constraining your growth and losing out on a portion of the market. Every demographic your system underserves represents lost revenue and opportunity.
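A first step toward spotting this kind of skew is simply measuring performance per demographic segment instead of in aggregate. The sketch below uses synthetic data and a made-up urban/rural split to show the idea.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Per-group accuracy from (group, prediction, label) rows."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for group, pred, label in records:
        counts[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / counts[g] for g in counts}

# Synthetic evaluation rows; in practice these come from a held-out
# test set tagged with the segments you care about.
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 1, 0),
    ("rural", 1, 0), ("rural", 0, 1), ("rural", 1, 1), ("rural", 0, 0),
]
print(accuracy_by_group(records))  # a gap between groups is revenue left on the table
```

An aggregate accuracy number would hide the gap entirely; the per-group breakdown makes the underserved segment - and the lost market it represents - visible.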
4. Goal-Oriented Design Creates Value
Generative AI systems that produce impressive outputs without helping users achieve concrete goals won't retain users long-term. The crucial question isn't just whether your system can generate compelling text or images, but whether it helps users accomplish meaningful tasks. Value creation drives retention and leads to happy customers.
5. Sustainable Engagement Builds Longevity
While it might be technically possible to create AI experiences that nudge users toward manipulative behavior and maximize short-term engagement, this approach ultimately leads to burnout and abandonment (like my decision to cut out news and social media from my life completely). Aim for sustainable engagement that provides long-term value. Even gaming platforms now often include features encouraging breaks because they understand that sustainable engagement creates more lifetime value than exploitation.
Closing Thoughts
My core message is this: The future of AI ethics isn't about more impassioned moral appeals – it's about demonstrating that ethical AI is better business (at least in some cases). As AI becomes increasingly integrated into critical domains, the companies that succeed won't be those with the most virtuous mission statements, but those that build trustworthy, explainable, and genuinely helpful systems that align ethical considerations with user needs and business objectives.
And maybe that’s the most important insight of all. Sustainable ethics requires sustainable business models. By reframing AI ethics as a business imperative - not just a moral one - we create the conditions for those values to survive and thrive in real markets.