Good or No Good? The Ethical Questions Raised by AI

Reading Time: 4 Minutes | 16.02.2022 | Currents & Trends

What you need to consider regarding the human component of AI

Researchers assume that AI will soon have cognitive skills similar to, if not better than, those of humans. It's not a question of "whether" but "when". We talked to DI Christof Wolf Brenner of Know-Center GmbH about what to bear in mind when you want to implement AI.

AI already helps to achieve results more quickly and gain new insights. The AI's learning process is both autonomous, meaning it runs without constant supervision, and adaptive: it finds and learns different things depending on what we show it.

Even if this sounds quite promising, there is also a downside. A prominent example of this is the so-called AMS algorithm (available in German only), which divides jobseekers into three classes: high, medium and low chances of finding permanent employment within the next six months. Critics accuse the system of "cementing" existing social inequalities.

This means that the AI makes decisions based on an ideology that has long been considered outdated. AI therefore touches on ethical questions that can lead to problems and even public criticism if they aren't given the appropriate attention.

The biggest technological challenges at this point

The topic of AI and ethics is currently reflected in the following three areas in particular.

  1. Bias: Simply put, these are prejudiced assumptions and resentments that are fed into algorithms. As a consequence, an AI system might suggest only men or Caucasian candidates for vacancies even when qualifications are equivalent (see the sketch after this list).
  2. Lack of transparency: In some situations, it cannot be determined why an algorithm produces a certain output for a specific data input. Researchers are working on the topic of "Explainable AI" (available in German only) to get a grip on this issue.
  3. Data protection: Data is an asset, but it is also a right that must be protected in the face of AI. Technological measures such as anonymization as well as regulatory approaches are meant to help here.
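
To make the bias point more concrete, here is a minimal sketch of how a team might compare selection rates across demographic groups in a shortlisting model. The column names, the toy data and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions for illustration, not part of the article.

```python
# Minimal sketch: comparing selection rates across groups (demographic parity check).
# Column names ("gender", "shortlisted") and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def selection_rate_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group, e.g. share of applicants shortlisted."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = perfectly equal)."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Toy data standing in for model outputs on equally qualified applicants.
    data = pd.DataFrame({
        "gender": ["m", "m", "m", "f", "f", "f", "f", "m"],
        "shortlisted": [1, 1, 0, 0, 1, 0, 0, 1],
    })
    rates = selection_rate_report(data, "gender", "shortlisted")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # flag for human review if one group is selected far less often
        print("Warning: possible bias - review training data and features.")
```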

Approach and implement AI in practice

How to avoid ethically questionable results

In order to minimize the risk of an AI solution yielding morally unwanted results, you can pursue three simple approaches.

  1. Educate: Raise awareness and understanding of the issue. This also paves the way for a strategic approach to using AI. Otherwise, AI would be implemented blindly and cause many issues that would need to be resolved later.
  2. Take a position: The management has to clearly communicate the company's take on ethics and AI as well as the non-regulatory principles it commits itself to.
  3. Establish processes: You need processes for systematically and regularly checking the risk potential of the AI application throughout its concept, planning, development and use (a possible sketch follows this list).
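
As a purely illustrative sketch, such a recurring risk check could be captured in a simple per-phase checklist. The phases follow the article; the questions and the class and field names are assumptions, not a prescribed framework.

```python
# Illustrative sketch of a recurring ethics/risk checklist per project phase.
# The phases follow the article; the questions are assumed examples, not a standard.
from dataclasses import dataclass, field

@dataclass
class RiskCheck:
    phase: str                      # "concept", "planning", "development" or "use"
    questions: list[str] = field(default_factory=list)
    findings: dict[str, str] = field(default_factory=dict)

    def record(self, question: str, finding: str) -> None:
        self.findings[question] = finding

    def open_items(self) -> list[str]:
        return [q for q in self.questions if q not in self.findings]

checks = [
    RiskCheck("concept", ["Which groups could be disadvantaged by the model's decisions?"]),
    RiskCheck("development", ["Has the training data been tested for bias?",
                              "Can individual decisions be explained to affected users?"]),
    RiskCheck("use", ["Is personal data anonymized or minimized?",
                      "Is there a review cycle for model drift and complaints?"]),
]

for check in checks:
    print(check.phase, "- open items:", check.open_items())
```
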
How industrial SMEs plan AI solutions

When the foundations for AI and ethics have been laid, it's time for precise planning. First, the concept has to be aligned with the corporate strategy. The essential question here is how AI fits into your organization and how it can help you achieve your goals.

The planning includes not only technological aspects but also corporate and human ones. The latter concern the effects of AI decisions on the users.

The next thing you need is a good roadmap. Our experience shows that companies should ideally start with quick wins wherever they have enough data of sufficient quality from which human users have already been able to draw useful conclusions without much effort.
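
As a rough sketch of such a screening step, the snippet below checks a candidate dataset for completeness and label availability before it is picked as a quick win. The file name and column names are placeholders, not references to a real system.

```python
# Rough sketch for screening a candidate dataset before picking it as a "quick win".
# The file name and column names are placeholders for illustration only.
import pandas as pd

def quick_win_screening(df: pd.DataFrame, label_col: str) -> dict:
    """Return simple indicators of whether a dataset is complete and labeled enough."""
    return {
        "rows": len(df),
        "missing_share": float(df.isna().mean().mean()),       # average share of missing cells
        "labeled_share": float(df[label_col].notna().mean()),  # share of rows with a label
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
    }

df = pd.read_csv("production_samples.csv")          # placeholder path
report = quick_win_screening(df, label_col="quality_label")
print(report)
```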

Last but not least, the implementation requires competent partners that ensure the success of the AI project not only in a technological and organizational sense but also regarding ethical questions.

Conclusion: AI should be treated with caution

Say hello to public criticism and loss of credibility

If you work with data-driven AI, for example to divide manufactured goods into "OK" and "Scrap", you are making a strong claim about generalization: that you can make accurate statements about future cases based on observed samples. On the one hand, this means that AI solutions scale well. On the other hand, errors and ethically questionable results scale with them.
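
A minimal sketch of that generalization claim, using synthetic data and an assumed logistic-regression quality model, shows how the same test error rate carries over to every future part once the model is applied at scale:

```python
# Minimal sketch of the "OK" vs. "Scrap" generalization claim: a model trained on
# observed samples is applied to all future parts, so its error rate scales with volume.
# Data here is synthetic; features and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))                    # e.g. measured dimensions of parts
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = "OK"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

error_rate = 1 - model.score(X_test, y_test)
print(f"Estimated error rate: {error_rate:.1%}")
# At production scale the same rate applies to every future part:
for volume in (1_000, 100_000):
    print(f"Expected misclassified parts out of {volume}: ~{int(error_rate * volume)}")
```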

Consequently, AI projects have to be planned very well. Embedding AI in the existing organization is a serious and long-term change process that not only includes technological aspects but also challenges underlying corporate beliefs.

Companies that don't approach the topic with the necessary caution risk losing their license to operate due to public criticism and a long-term loss of credibility. Customers no longer see added value in functionality alone but also in ethical aspects and compliance with social expectations.
