
Published on June 10th, 2020 | by Ben Hartwig


AI and Trust: How AI Transparency Conflicts with Privacy Rights

When most people think of Artificial Intelligence (AI), they think of futuristic robots making our lives easier. But AI is more than that.

It does entail machine-driven intelligence. However, that intelligence is a simulation of human intelligence processes. In a nutshell, AI processes information and data fed to it by humans.

AI therefore raises issues of business, academia, and ethics. When its processes and decisions are obscure, an AI system lacks transparency. This lack of transparency is the main fear surrounding unfettered AI.

To stay within ethical codes, an AI system must provide unambiguous details about how it works and avoid confusing wording and contexts. This is how AI becomes “explainable”.

The concept of “reasonable inferences” is a follow-up to the right to explanation. The right to explanation gives the reasons why a particular decision was taken. However, those reasons may not be justifiable.
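As a concrete illustration, here is a minimal sketch of what a per-decision explanation might look like in code. It assumes a simple linear credit model; the feature names, data, and the explain helper are all hypothetical, and production systems use far more sophisticated attribution methods.

```python
# A minimal sketch of a per-decision explanation, assuming a simple
# linear credit model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "years_employed"]
X = np.array([
    [40, 0.45, 2],
    [85, 0.20, 8],
    [30, 0.65, 1],
    [60, 0.30, 5],
])
y = np.array([0, 1, 0, 1])  # 0 = denied, 1 = approved

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Rank each feature's contribution (coefficient * value) to one decision."""
    contributions = model.coef_[0] * applicant
    order = np.argsort(-np.abs(contributions))
    return [(features[i], float(contributions[i])) for i in order]

applicant = np.array([35, 0.55, 3])
decision = model.predict(applicant.reshape(1, -1))[0]
print("approved" if decision else "denied")
for name, c in explain(applicant):
    print(f"  {name}: {c:+.2f}")
```

Even this crude attribution gives the individual something to contest, which is the point of the right to explanation.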

Transparency also includes prospective and retrospective elements. The prospective element of transparency requires an individual to be given information ahead of potential processing.

The retrospective element implies it must be possible to retrace steps to decipher why and how an AI decision was reached. Importantly, these ethical issues have given rise to the need for data protection law.
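In practice, the retrospective element amounts to an audit trail. The sketch below shows one hypothetical way to log each automated decision with enough context to retrace it later; the field names and the log_decision helper are illustrative assumptions, not a prescribed standard.

```python
# A sketch of retrospective transparency: record what the system saw,
# which model decided, and what it decided, so the decision can be
# retraced later. All names here are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, log_file="decisions.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # the exact data it saw
        "output": output,                # what it decided
    }
    # Hash the record so later tampering is detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-model-v1.3", {"income": 35_000, "debt_ratio": 0.55}, "denied")
```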

The impact on human privacy rights is at the forefront of this issue. Has AI become the enemy of trust? How exactly does AI transparency conflict with human privacy rights? We will explore these issues below.

The Limits of Transparency for AI

Transparency for AI systems rests on the idea that to unveil is to know. To know how a system works is to be able to hold it accountable. However, transparency is not an end state. It is a process of understanding the intricacies of how a system works.

Making information transparent requires the affected individuals to be literate in AI risk assessment; it puts the onus on them to challenge automated decisions. Attention to the specificity of the technology, its context, and the different types of users within each stakeholder group is essential for protecting users’ data-protection rights and reducing identity-theft risks.

This raises particular challenges beyond the question of how and what information needs to be presented to users. The transparency requirement should be tailored to each stakeholder, including developers, users, regulators, and society in general.

Full transparency can actually do harm. It may reveal information that exposes vulnerable groups. Also, transparency does not always lead to a common understanding or trust. This is because different entities interpret the revealed information differently.

Organizational and Societal Aspects

Transparency has wide implications in cultural and organizational contexts; nothing takes place in a vacuum. Transparency as performativity is not just the disclosure of information: performativity evaluates the tensions and discourses inherent in transparency processes, as well as the unintended consequences of transparency. This approach results in a more holistic understanding of transparency, one that includes the ritualistic and socio-material practices organizations perform.

Negative Impacts of Transparency on Human Rights

Human rights violations are a potential side effect of AI. The power to identify, alienate, and classify humans based on their data is highly contentious, and it exponentially increases the rate of human rights violations. For instance, the use of AI in the criminal justice system may interfere with rights to privacy and personal liberty. AI transparency may also lead to a constant battle between privacy and the need to keep public records.

The hacking of explanations can expose companies to regulatory action and lawsuits. This is known as the “transparency paradox”. The confusion starts with the ambiguous concept of ‘privacy’, which has become complex, even more so in AI contexts.

Users of AI often click through lengthy electronic agreements without paying mind to the finer details, signing away their rights in the process. The information covered by these agreements is potentially mined for multiple uses, such as purchase recommendations, marketing, or other services.

AI has its positive sides, of course. In crime prevention, voice identification and facial recognition are ground-breaking. The flip side is that private citizens express privacy concerns. These technologies track our every movement, constantly collecting data, which is frightening.

History, Conceptual Development, Benefits & Regulation

Since the 1990s, the concept of transparency has been on the rise; it is not a very old concept. In the field of AI, it is linked to the development of data protection law, and to the breaches and abuses that resulted from a lack of coherent data protection prior to Europe’s General Data Protection Regulation (GDPR).

How Does AI Transparency Help?

Specific benefits of AI transparency include:

  • Improved Automation: The ability of AI to perform human tasks is ground-breaking, though automation is not without its controversy. The purpose of automation is to eliminate or drastically reduce human labor. Rather than seeing it as an enemy of labor, humans will have to adapt and re-skill. AI is here to stay, for better or for worse.
  • The Criminal Justice System: Many police departments and courts are turning to artificial intelligence to mitigate bias. A machine can now handle profiling and risk assessment. AI looks for patterns in criminal data, public records and historical data to make recommendations.
  • Emergency Disaster Response: The 2017 California wildfires caused major destruction, burning more than 1 million acres of land and claiming many lives. Companies are increasingly embracing artificial intelligence to fight disasters with sophisticated algorithms. AI can provide real-time data on disasters and weather events and help shape smarter disaster responses.

How Should AI Transparency Be Regulated?

The issue of regulation is not simple. There is a balancing act between legitimate social concerns and the potential dampening of productivity and innovation, and there is danger in having either too much or too little regulation.

Heavy-handed governmental controls may be detrimental to progress, and some argue that AI risks are best deliberated in workshops and conferences. The U.S. has made some attempts at regulation, however: a recent executive order outlines policy for a coordinated AI strategy across dozens of agencies.

China is very progressive in this area, quickly establishing regulations and standards to give itself a competitive edge. The EU, on the other hand, intends to develop ethical AI rules.

In terms of military conflict, the need for regulation may be pressing. International consensus may be hard to achieve though. Perhaps creative compromises can be made.

For example, nations could agree to ban fully autonomous weapons, or adopt a tiered accountability system. Human oversight also remains the best safeguard in such matters. It does seem that a regulatory system is inevitable.

Ethical and Legal Relevance of Transparency in AI

Ethical Relevance

When we instruct a machine to reason for us, we hand over autonomy. And to program the machine to reason, we have to feed it large amounts of data. The problem with this proliferation of data is that it is bound to contain biases.

There are countless examples of data sets reinforcing unsavory societal biases, beliefs we are progressively trying to change in order to make society better for all.

In training a computer to reason, we may be training it to perpetuate biases and to come to unintended conclusions.
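One way to catch such bias before deployment is to measure it directly. Below is a sketch of one common check, the disparate impact ratio: the favorable-outcome rate for one group divided by the rate for another. The outcomes and group labels here are hypothetical.

```python
# A sketch of one common bias check: the disparate impact ratio.
# Outcomes and group labels below are hypothetical.
def disparate_impact(outcomes, groups, group_a="A", group_b="B", favorable=1):
    def favorable_rate(g):
        member_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(o == favorable for o in member_outcomes) / len(member_outcomes)
    return favorable_rate(group_a) / favorable_rate(group_b)

outcomes = [1, 0, 0, 0, 1, 1, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
# A common rule of thumb (the "80% rule") flags ratios below 0.8.
```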

Legal Relevance

Article 22 of the GDPR potentially prohibits automated AI decision-making. However, the definition of “decision” has been disputed, and the interpretation of what constitutes “an acceptable level of human involvement” in such decisions is shaky.

Questions also surround what counts as a “specific legal impact”. Article 22 has been interpreted as creating a right to explanation for AI systems, which would place an obligation on technology companies to reveal their source code and algorithms. This brings up real risks: if those revelations end up in public records, the security threat can be immense.

In the U.S., the Government Accountability Office (GAO) has reported concerns about the absence of a comprehensive national internet privacy law, mainly over the gathering, use, sale, and exposure of consumers’ personal information.

Should we Trust AI Transparency?

This is not an easy question to answer. AI is a tool created to automate tasks based on human-fed data, and we have looked at the potential risks of automation and of delegating human reasoning to machines.

However, this does not necessarily lead to a problem of trust. The issue may rather be that of perception. This makes sense when you consider that AI processes and conclusions are subject to varying interpretations.

We should focus on ensuring the reliability of software designs and the quality of the data fed into them. This is a hard task in itself, but it is worth pursuing. We have to arrive at a point where we can trust the creators of these processes, and trust that they have good intentions.
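As a small illustration of what “quality of the data” can mean in practice, here is a hypothetical validation gate that flags bad records before they ever reach a model; the field names and bounds are illustrative assumptions.

```python
# A minimal sketch of a data-quality gate: validate records before
# they reach a model. Field names and bounds are illustrative.
def validate_record(record, schema):
    errors = []
    for field, (low, high) in schema.items():
        value = record.get(field)
        if value is None:
            errors.append(f"{field}: missing")  # incomplete data
        elif not (low <= value <= high):
            errors.append(f"{field}: {value} outside [{low}, {high}]")
    return errors

schema = {"age": (18, 110), "income": (0, 10_000_000)}
print(validate_record({"age": 150, "income": 52_000}, schema))
# -> ['age: 150 outside [18, 110]']
```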

It is imperative that we create explainable systems that can be subjected to scrutiny. But ultimately, we have to be willing to relinquish some control for the greater good. For as long as we rely on AI, it is not always beneficial to have unchecked access in a blind pursuit of transparency.

Conclusion

The role of AI in society has become significant; the horse has truly bolted. The advantages to organizations and society are widespread, and innovation is at the heart of any progressive society. The U.S. is at the forefront of AI in every aspect except legislation, though that is not to say it has made no strides in this area.

Regulation is inevitable and impending. This is more so the case when you consider the risks to human privacy rights. The balance between the right to privacy, and the need for innovation through AI, is a fragile one.

It hangs by the thread of transparency; that is how delicate transparency is as a concept. What really is transparency? What are its limitations? What are its risks and benefits? These questions are crucial to instilling people’s trust in AI.



About the Author

Ben is a Web Operations Director at InfoTracer. He authors guides on marketing and overall cyber security posture, and enjoys sharing best practices.


