
How to create a more ethical AI app

The world has witnessed a huge boom in AI since ChatGPT’s public preview was announced last year. We have seen tons of new startups in this space and huge leaps in the technology in a very short time. Companies all over the world are making major investments in developing their own AI solutions and bringing AI tools into their workflow. 

These solutions offer enhanced productivity, shorter time to output, and lower workload for everyone. But they also present many ethical questions, from how they’re built and how they work to the output they produce. 

If organizations cannot ensure the ethical development and use of these AI tools, the repercussions can be far-reaching, both for the company and for the world around it. 

Why should you care about using AI ethically?

Artificial intelligence is one of those technologies that comes along once or twice in a lifetime and has the potential to touch a huge range of industries. Blockchain may reshape payments, assets, and ownership, and IoT will change how devices are connected and controlled. But AI has the potential to transform content creation, device management, cybersecurity, finance, healthcare, construction, transportation, farming, and almost every other industry or field. 

But this also means that the wrong approach or lack of ethical guidelines can have much more widespread impact. 

The development and usage of AI will have strong legal implications

In one of the first-ever cases related to AI, a New York tutoring company paid $365,000 over charges that its AI hiring tool rejected women over 55 and men over 60. We can expect more lawsuits as companies around the world implement AI solutions in their workplaces. 

Governments all over the world are also looking into legislation regulating how AI systems use data. Artists and content creators have raised many concerns about AI companies scraping copyrighted work to train their models without compensating the people who made it.

Unrestricted AI use can put companies and organizations at risk of legal troubles in the future. 

Unrestricted AI use can affect stakeholder trust

Even advanced AI models have a certain level of unpredictability to them. Large language models like GPT-4 have a tendency to make up facts and statistics. There have been many instances where self-driving cars simply didn’t ‘see’ pedestrians or other vehicles. AI-based classification systems, such as solutions that detect cancer by analyzing X-rays, are not 100% accurate. 

When these AI tools are put to use without a proper understanding of their capabilities and drawbacks, or without examining the associated risks, the consequences can be disastrous. When a company develops or deploys tools like these without transparent policies or approaches, stakeholders will grow uneasy and the organization will lose their trust. 

Ethical AI solutions build customer trust

For the average customer, AI is still an unknown quantity. They are mostly surprised by what AI can do, but often they don’t know what AI can’t. For instance, YouTube and Instagram are filled with videos of consumers taking their hands off the wheel while driving cars with autonomous capabilities. The story of the lawyer who used ChatGPT for legal research and ended up citing fabricated cases was all over the news recently. 

When companies fail to educate customers about the capabilities and limits of their AI solutions, or about how they’re developed (even if the failure is unintentional), the results can be terrible. 

For instance, consider an automaker that: 

  1. Called a level 2 autonomous driving system that requires drivers to keep their hands on the steering wheel and eyes on the road ‘Autopilot’ and even ‘Full Self-Driving’, even when experts within the company advised against it. 
  2. Announced that a car with level 2 autonomous capabilities has all the hardware for level 5 self-driving (it may have, but announcing it confuses the general public). 
  3. Let its very popular CEO take his hands off the steering wheel during interviews while the vehicle was on ‘Autopilot.’ 
  4. Staged a video of its vehicle ‘driving itself’ by taking it through a predetermined route that was 3D-mapped beforehand. 
  5. Failed to disclose that, while trying to film that video, drivers had to take over and one car even crashed during trial runs. 
  6. Opened that same video by saying, “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”

If consumers find that a company is using their biometric data to train AI solutions, they will immediately lose trust in the organization. If an AI solution fails and costs consumers (monetarily or otherwise), it may even result in a lawsuit. 

It’s the right thing to do 

Even if none of the above happens, even if there are no legal implications, even if stakeholders aren’t worried about AI, and even if customers don’t notice how your AI systems are affecting them, we still have a responsibility to use AI ethically. History is full of examples of people who used science and technology without ethical guardrails and caused disastrous consequences. 

From nuclear weapons that continue to destroy people’s lives even now, and the Tuskegee syphilis study, to the recent OceanGate tragedy and the Boeing 737 Max crashes, we have no shortage of disasters caused by the unethical use of science and technology. 

AI is a powerful tool, and without ethical oversight, it can be a very dangerous weapon. 

How to build ethical AI applications

Building or using AI applications ethically takes considerable but necessary work. With strong foundations in transparency and in the safety and security of all parties, organizations can build powerful AI applications that genuinely serve their users. 

Here’s what your organization can do: 

1. Create a responsible AI usage policy

It may not be feasible to analyze the ethical aspects of every AI project before building it, especially if you’re a startup trying to get a product out the door. However, by building a responsible AI usage policy, the organization can ensure that every product stays within its guidelines. A policy can speed up decision-making without creating ethical problems. 

When creating the policy, make sure it addresses users’ privacy concerns and the risks associated with the apps you’re building, and that it keeps AI systems from making decisions that humans should be making.

LunarLab has developed its own responsible AI usage policy, which can serve as a starting point for organizations building their own. 

2. Communicate the policy to everyone

It’s one thing to build a policy; it’s another to implement it. Make sure everyone in the organization is familiar with the policy and how it should direct decision-making. Employees should be trained on the policy guidelines during onboarding, and the policy should have a place at the table during every discussion. 

The policy must be examined during every stage of product development and, if needed, the development strategy must be revised. If AI is a core aspect of your organization, it might be a good idea to set up an AI ethics committee with the authority to make project-related decisions. 

3. Understand the legal landscape regarding AI in your industry

Explore the laws governing AI use in your industry and location, and make sure your responsible AI usage policy keeps you in compliance with them.

It’s important to note that even if there are no laws directly regulating AI, laws related to privacy, copyrights, discrimination, and public safety may still apply to your use case. 

4. Build out your data set by eliminating biases

When working with data, the rule is ‘garbage in, garbage out.’ Artificial intelligence and machine learning (ML) are no different. ML systems require tons of data to work, and a biased data set will produce biased outputs. 

Bias can creep in even if the system doesn’t directly consider age, gender, race, sexual orientation, or any other demographic attribute. 

For instance, imagine an AI hiring tool trained on a data set from the past 10 years with demographic markers such as age, gender, and race removed. However, the HR reps who provided this data were biased against a minority group.

Even with the race-related data removed, the system may find a pattern that people from a certain college were rarely hired. The resulting AI system will discriminate against applicants from that college, and since a large portion of minority candidates come from that college, it will effectively discriminate against that minority group. 

When building the data set, ensure that it is representative of all the different inputs that the AI system will have to process. 
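
As a concrete illustration of the college-as-proxy problem above, here is a minimal Python sketch of a pre-training data audit. The column names (hired, race, college) and the CSV file are hypothetical; the idea is to keep demographic data in a separate audit table so you can measure selection-rate gaps and flag proxy features, even though those columns never reach the model itself.

```python
# A rough sketch of a pre-training data audit. Column names ("hired", "race",
# "college") and the CSV file are hypothetical; demographic data stays in a
# separate audit table and is never fed to the model.
import pandas as pd
from scipy.stats import chi2_contingency

def selection_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. 'hired') within each demographic group."""
    return df.groupby(group_col)[label_col].mean()

def proxy_strength(df: pd.DataFrame, feature_col: str, group_col: str) -> float:
    """Cramér's V between a feature and a demographic group (0 = no association,
    1 = perfect proxy). High values mean the feature can stand in for the group."""
    contingency = pd.crosstab(df[feature_col], df[group_col])
    chi2, _, _, _ = chi2_contingency(contingency)
    n = contingency.to_numpy().sum()
    r, k = contingency.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

# Hypothetical usage:
# audit = pd.read_csv("hiring_history_with_demographics.csv")
# print(selection_rates(audit, "race", "hired"))    # flag large hiring-rate gaps
# print(proxy_strength(audit, "college", "race"))   # flag features acting as proxies
```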

5. Test, test, test

Even with rigorous checks to remove bias from the training data, there’s still a very real possibility that the system is biased. For this reason, the system has to be tested with proper testing protocols.

It has to be tested with all the types of data it may receive when deployed in real-world conditions.

In some cases, AI systems simply cannot handle data they weren’t trained on. For instance, several self-driving solutions have come under criticism for failing to perceive people of color. The system wasn’t technically discriminating; it didn’t perceive them at all, which is a lot worse. 

To prevent this, it’s not enough to test the system for discrimination; you also have to verify that its outputs are as expected across all the conditions it will face. 
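
Here is a minimal sketch of what such a check could look like for a binary classifier, assuming a held-out test set that still carries demographic labels for auditing. The names (y_test, X_test, demographics_test) and the 10% threshold are illustrative, not a prescribed standard.

```python
# A minimal sketch of slicing a binary classifier's results by demographic group
# instead of relying on overall accuracy alone. Inputs are assumed to be NumPy arrays.
import numpy as np

def group_metrics(y_true: np.ndarray, y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Per-group selection rate and true-positive rate."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        selection_rate = y_pred[mask].mean()
        positives = y_true[mask] == 1
        tpr = y_pred[mask][positives].mean() if positives.any() else float("nan")
        results[g] = {"selection_rate": selection_rate, "tpr": tpr}
    return results

def check_parity(results: dict, max_gap: float = 0.1) -> None:
    """Flag the model if selection rates across groups differ by more than max_gap."""
    rates = [m["selection_rate"] for m in results.values()]
    if max(rates) - min(rates) > max_gap:
        raise ValueError(f"Selection-rate gap exceeds {max_gap}: {results}")

# Hypothetical usage:
# results = group_metrics(y_test, model.predict(X_test), demographics_test)
# check_parity(results, max_gap=0.1)
```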

6. Be transparent

Transparency builds trust and reduces the risk associated with artificial intelligence systems. If testing reveals biases or discrimination in your models, share that information promptly with your stakeholders and users. Be open about the capabilities and limitations of the system so that users know how and when to use it. 

Plenty of AI companies, and companies that use AI, exaggerate the capabilities of their systems or fail to reveal their limitations. When those systems are deployed in high-risk environments, these omissions can even put human lives in danger. 
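
One lightweight way to practice this transparency is to publish a ‘model card’ alongside each system: a short, structured disclosure of what the model is for, what it was trained on, and where it falls short. The sketch below uses hypothetical field names and placeholder values; the exact schema is up to your organization.

```python
# A sketch of a model-card-style disclosure. All values below are placeholders,
# not real measurements; the point is that limitations are written down and
# published, not buried.
MODEL_CARD = {
    "name": "resume-screening-assistant",     # hypothetical model
    "version": "1.3.0",
    "intended_use": "Rank applications for human review, not final hiring decisions.",
    "out_of_scope": ["Automated rejection without human sign-off"],
    "training_data": "Internal applications, demographics removed, audited for proxy features.",
    "known_limitations": [
        "Lower accuracy on resumes translated from other languages",
        "Small residual selection-rate gap across audited groups",
    ],
    "last_fairness_audit": "<date of last audit>",
}
```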

7. Build continuous testing into your development process

Use the responsible AI usage policy to incorporate continuous testing into the development process. Have detailed checklists and procedures for every application before it’s released, and make sure the process includes informing users and stakeholders about the system’s limitations before launch. Ideally, the testing process should involve an AI ethics committee with the power to pause development or change its direction. 

For models that are continuously trained on new data, keep testing even after deployment. 
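
As a sketch of what that might look like in practice, a single fairness check can run both in CI before each release and on a schedule after deployment. The evaluation file, the threshold, and the model’s predict() interface below are assumptions, not a prescribed setup.

```python
# A rough sketch of a fairness release gate. The evaluation file, the threshold,
# and the model's predict() interface are hypothetical.
import numpy as np

EVAL_SET = "eval/holdout_with_demographics.npz"   # frozen audit set (hypothetical)
MAX_SELECTION_GAP = 0.10                          # threshold set by your AI policy

def release_gate(model) -> None:
    """Raise if the model's selection rates drift apart across audited groups."""
    data = np.load(EVAL_SET)
    preds = model.predict(data["features"])
    rates = [preds[data["groups"] == g].mean() for g in np.unique(data["groups"])]
    gap = max(rates) - min(rates)
    if gap > MAX_SELECTION_GAP:
        raise RuntimeError(
            f"Release blocked: selection-rate gap {gap:.2f} exceeds {MAX_SELECTION_GAP}"
        )

# Run release_gate() from your CI pipeline before every deploy, and from a
# scheduled job afterwards, so retrained models face the same checks as new ones.
```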

8. Provide a reporting and feedback mechanism for users

Even with extensive testing, there’s a possibility that AI solutions may show bias or discriminatory behavior. When presented with new data, they may behave in unpredictable ways. 

By providing a reporting and feedback mechanism, your team can improve the model and enhance its capabilities. Even if the output doesn’t affect the safety of the users or show discrimination, a feedback mechanism can help iterate the models for better results.
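
A feedback channel can be very simple, as long as each report captures enough context to reproduce the problem. The sketch below uses hypothetical field names and a local log file; a production system would more likely use a queue or a database.

```python
# A minimal sketch of a structured feedback channel: every flagged output is
# stored with enough context for the team to reproduce it and fold it back
# into testing and retraining. Fields and storage are hypothetical.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackReport:
    user_id: str
    model_version: str
    model_input: str          # what the user submitted
    model_output: str         # what the system returned
    category: str             # e.g. "inaccurate", "biased", "unsafe", "other"
    comment: str = ""

def record_feedback(report: FeedbackReport, path: str = "feedback_log.jsonl") -> None:
    """Append the report to a local JSONL log for later review."""
    entry = {"timestamp": time.time(), **asdict(report)}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage:
# record_feedback(FeedbackReport(
#     user_id="anon-123", model_version="1.3.0",
#     model_input="resume text ...", model_output="rank: 42",
#     category="biased", comment="Seems to penalize a career gap.",
# ))
```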

Want to make your AI app ethical? Let’s work together

LunarLab is committed to making the world a better place. We are a public benefit corporation, which means we are legally bound to consider the social implications of every project we work on and every decision we make. 

We have worked with multiple companies to develop ethical guidelines for using AI within their organizations and their apps. And we’d love to work with you to make this world a better place.

Reach out to us, and let’s chat.
