Ethics Presentations
Author @Saief1999
We've been asked to present some ethical issues in class, which were later included in the exam. This is a summary of the different presentations.
1. AI Art and ethical concerns
What Are GANs
A GAN (Generative Adversarial Network) is an unsupervised machine learning framework introduced in 2014.
It involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset.
It uses two neural networks, pitting one against the other (hence the “adversarial”), to generate new, synthetic instances of data that can pass for real data. GANs are widely used in image, video, and voice generation.
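To make the adversarial setup concrete, here is a minimal training-loop sketch (ours, not the presentation's) in PyTorch on toy 1-D data; the network sizes and the target distribution N(4, 1) are arbitrary illustration choices.

```python
import torch
import torch.nn as nn

# generator: noise -> fake sample; discriminator: sample -> probability it is real
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0        # "real" data drawn from N(4, 1)
    fake = gen(torch.randn(64, 8))         # synthetic data from random noise

    # discriminator step: push real towards 1, fake towards 0
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: try to fool the discriminator into outputting 1 on fakes
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(gen(torch.randn(1000, 8)).mean().item())  # should drift towards 4.0
```

The same two-player loop scales up to images: the generator becomes a deconvolutional network and the discriminator a convolutional one.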
Use cases of GANs:
- Text to image translation
- Face filters
- Image to image
- Image to text
- Music generation
AI art is artwork generated by Artificial Intelligence based on a specific input.
AI Art solutions:
- DALL-E 2
- Deep Dream Generator
- Craiyon
- MidJourney
- Imagen
DALL-E
DALL-E was created to “create original, realistic images and art from a text description.” The user interface is simple: the user enters a string of words describing the image they have in mind. Using the information in the prompt, the AI generates an image, referencing thousands of preexisting works of art that fit the aesthetic the user requested.
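As an illustration of that interface, here is a hedged sketch using the OpenAI Python library as it looked around DALL-E 2's launch (the prompt and key are placeholders, and the library's interface has since changed):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# one text prompt in, one generated image URL out
response = openai.Image.create(
    prompt="an astronaut riding a horse in the style of Van Gogh",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])
```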
Deeper meaning and the presence of an artist’s complex thought process are some of the most important elements of art. Without these factors, AI art wouldn’t easily pass as “traditional” human-made art. This is where deep learning algorithms become convenient.
What DALL-E is missing
Unlike human artists, who often use imagery to illustrate a theme, DALL-E has no sentience. Even with complex deep learning algorithms, AI has no capacity to feel emotions intensely or understand the human psyche. While useful for creating “superficial art” that is visually stunning, AI falls short when it comes to producing art that conveys complex emotions.
DALL-E is also unable to reliably generate realistic human faces, despite using approximately one million images as training data. These lapses in the AI's judgement show that DALL-E, Midjourney, and other platforms are unable to replace photography and art in all their complexity.
Ethical Issues & Concerns
Is AI art considered art, or merely a GAN product?
- An artwork made by Artificial Intelligence (AI) won first place at the Colorado State Fair's fine arts competition in August 2022, sparking controversy about whether AI-generated art can be used to compete in competitions.
Who is the real author of the work?
- The artist whose original style was used
- The AI model owner (probably this)
- The user
How AI Art can be biased
- AI image generators may contribute to discrimination by reproducing harmful stereotypes acquired through data collections containing real-life biases. To some extent, this concern can be mitigated through technological means. For instance, biases can be limited through supervised machine learning (e.g. by weighting particular data more or less), certain words can be banned from prompts, and certain words can be suffixed more or less randomly to inputs (such as “–woman” or “–person of color”) to achieve greater representation (see the toy sketch below).
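As a toy illustration of those prompt-level mitigations (the banned-word list and suffix pool below are invented placeholders, not any vendor's actual implementation):

```python
import random

BANNED = {"offensive_term_1", "offensive_term_2"}        # placeholder vetted list
SUFFIXES = ["", " woman", " person of color", " elderly person"]

def sanitize_prompt(prompt: str) -> str:
    # drop banned words, then randomly append a demographic suffix
    words = [w for w in prompt.split() if w.lower() not in BANNED]
    return " ".join(words) + random.choice(SUFFIXES)

print(sanitize_prompt("a portrait of a doctor"))
# e.g. "a portrait of a doctor woman" -- nudging outputs toward more diverse representation
```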
AI image generators can be used to generate misinformation
For example, fake images can be used by scorned individuals to harass their former partners.
- Elon Musk child labor (fake images)
- Qatar pride flag
Artists say AI art technology uses their work without compensation
- Using online publicly available pieces of art for training
- In the UK, where Stability.AI is based, scraping images from the internet without the artist’s consent to train an AI tool could be a copyright infringement, says Gill Dennis, a lawyer at the firm Pinsent Masons.
2. Digital divide and ethical concerns
At a high level, the digital divide is the gap between those with Internet access and those without it. But the digital divide includes many factors such as access, affordability, quality, and relevance.
- Availability: Is there available access to the Internet in your area? Is there a nearby point of connection to the Internet?
- Affordability: Is access affordable? How does the cost compare to other essential goods?
- Quality of service: Are the upload and download speeds sufficient for the needs of users?
- Relevance: Does the connected community have the necessary skills and technologies?
- Additional divides: Digital literacy, and access to equipment…
The five dimensions of the digital divide:
- Access to hardware, software, and connectivity to the Internet.
- Access to meaningful, high quality, and culturally relevant content in local languages.
- Access to creating, sharing, and exchanging digital content.
- Access to educators who know how to use digital tools.
- Access to high-quality research on the application of digital technologies to enhance learning.
Problems
- The digital divide excludes individual learners and citizens from larger cultural and educational conversations.
- According to one study, students' disconnection due to a lack of digital skills or an inability to access digital learning lowers their GPA by 0.4, which translates into 4%-6% lower potential earnings in the future.
- Schools with smaller budgets, a lack of IT support, and unreliable equipment mean that students are not provided with the same learning opportunities.
Ethical Concerns
- For those who don’t have access and are in need of technology: Who is going to pay for that technology?
- Will there be help from the government in providing technology?
- Will taxpayers be responsible for helping support this change?
- Are there already funds in place that can help improve access?
- If individuals don’t know how to correctly use the technology: How will the public become educated on this topic?
- Who is now going to pay for an educator in this field?
- Will the individuals who benefit from the training be required to pay?
Solutions
- Collaboration between governmental, educational, and private sectors. If these three entities are able to work together, students will be far more equipped to handle digital learning.
- Create educational resources that are culturally relevant and available in several different languages.
- Community networks can bring affordable internet access to those who need it the most.
- Teachers should:
- be aware of their students' access to the internet.
- keep up to date with the newest innovations (to minimize the gap with other, more modern schools).
Reference: What is the Digital Divide?
Conclusion and final pointers
The digital divide, as a whole, remains an enormous and complicated issue - heavily interwoven with the issues of race, education, and poverty. The obstacle, however, is by no means insurmountable if broken down into specific tasks that must be accomplished. Aside from the obvious financial barriers, the following would help narrow the gap.
1. Universal Access
The government should subsidize Internet access for low-income households.
The private sector must commit to providing equal service and networks to rural and underserved communities so that all individuals can participate.
2. More Community Access Centers, Continued Support of Those Already Existing
According to data collected in 1998, minorities, individuals earning lower incomes, individuals with less education, and the unemployed - the exact groups most affected by the digital divide - are the primary users of community access centers (CACs).
In fact, those using the CACs "are also using the internet more often than other groups to find jobs or for educational purposes" (NTIA Falling through the Net 99). Community access centers, therefore, are clearly worthwhile investments.
3. Additional, Well-Trained Technical Staff
Computers and other technologies alone are not enough. Communities and schools must train and retain additional, better-qualified staff alongside new technologies to promote the best use of resources. In addition to understanding the new technologies, the staff must be able to teach others.
4. Change of Public Attitude Regarding Technology
At the same time, much of society needs to change its attitude concerning technology. Rather than perceiving computers and the Internet as a superfluous luxury, the public should view them as crucial necessities. The public must come to realize the incredible power of new technologies and embrace them as tools for their future and the future of their children.
3. AI in China and ethical concerns
Introduction: the novel 1984
Nineteen Eighty-Four is a famous novel by George Orwell. It revolves around a dystopian future where utilitarianism reigns, individualism is dead, and reality and history are just a matter of opinion.
Constant surveillance of every citizen is the norm. The party has a monopoly on facts and political discourse. There is control of information to the point where facts are not reality and reality can be changed at the whim of Big Brother and the "party".
Every citizen is under constant surveillance by the authorities, mainly via telescreens (with the exception of the Proles). The people are constantly reminded of this by the slogan "Big Brother is watching you", a maxim ubiquitously on display throughout the novel.
A. The golden shield project
The Golden Shield Project is a Chinese nationwide network-security project run by the e-government of the People's Republic of China. It includes a security management information system, a criminal information system, an exit and entry administration information system, a supervisory information system, and a traffic management information system, among others.
- Site blocking
- Topic filtering
- Search result rearrangement
- Mass surveillance
Each company in China is required to hire "content moderators", better known as "censors". The official number of these workers is 2 million.
Complaining about the government is allowed, and authorities mostly turn a blind eye to it, but mobilizing people through a rally is strictly censored.
B. The Social Credit System
A system developed in China where good actions raise your score and bad actions lower it. Your score can be looked up online, so others can judge you by it.
It is based on:
- Financial and criminal records
- Online search history
- Social media posts
The government scores you based on what it deems good social behavior, and a bad score will make your life harder:
- Ban on using public transport
- Ban on flights and exit visas (28 million airplane ticket purchases were blocked in 2021)
- Losing university access for yourself and your children
- Ban on dating apps
Data is collected using 620 million CCTV cameras nationwide.
C. AI in China
Following the exponential growth of technology in recent years, the Chinese government has collected a tremendous amount of data, alongside personally identifying information, for every citizen.
Use Cases
Jaywalking: A camera takes a photo of you, identifies you instantly, and issues a fine, alongside a public display of your face on the screen at the crosswalk.
Toilet paper waste control: In many public places, you cannot get toilet paper without a face scan.
Office monitoring: Cameras enable employers to check employees' entry and leave times, their mood, and their productivity.
Crowd control: AI can monitor crowds for their size and for how long each person has stayed in the same zone, detecting unusual activities (protests) in real time and allowing action to be taken immediately with minimal damage.
Skynet Project:
- Megvii: a rising tech company specialized in video-stream processing and in cross-checking facial data against enormous government databases, mainly criminal databases.
- $1.4 billion in funding, mostly from the Chinese, UAE, and Russian governments.
- 3,000 cases were caught using this system in 2021.
Education:
- Certain private experimental schools use headbands and cameras to monitor students during class. Some devices read brain activity through its electrical signals, and the data is sent to the teacher's computer in real time, where AI-based pattern recognition lets the teacher distinguish who is paying attention and who is not.
7. Explainability in Machine Learning and AI: the birth of XAI
A. What is XAI
Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend the reasons and logic behind decisions made by machine learning algorithms.
- Prediction accuracy (LIME)
- Traceability (DeepLIFT)
- Decision understanding (Dashboard, graphs)
We can use the SHAP framework to draw graphs and plots that help explain a model further, as in the sketch below.
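A minimal SHAP sketch (our example, assuming the shap and scikit-learn packages; the dataset is an arbitrary public one):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # attributes each prediction to the input features
shap_values = explainer.shap_values(X)  # one contribution value per feature per sample
shap.summary_plot(shap_values, X)       # beeswarm plot of which features drive predictions
```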
B. Why XAI
- AI systems are increasingly used in sensitive areas - Medical & military fields for example.
- Exponential advancements in AI may create existential threats
- Regulations impose a certain degree of reasoning.
- Eliminating historic biases from AI systems requires explainability.
- IBM's AI Fairness 360 toolkit, launched in 2018, is an open-source software toolkit that can help detect and remove bias in machine learning models. For example, a bank would want to know how its model's predictions will affect different groups of people (such as ethnicity, gender, or disability status); a small sketch with this toolkit appears below.
- Automated business decision making requires reliability and trust.
=> The more accurate the results, the harder it is to explain them.
![[Pasted image 20221231143230.png]]
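A small, hedged sketch of the AI Fairness 360 toolkit mentioned above (the toy DataFrame and its columns are invented for illustration):

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],  # toy encoding: 1 = privileged, 0 = unprivileged
    "label": [1, 0, 1, 0, 1, 0, 0, 1],  # 1 = favorable outcome (e.g. loan approved)
})
data = BinaryLabelDataset(df=df, label_names=["label"], protected_attribute_names=["sex"])

metric = BinaryLabelDatasetMetric(
    data, privileged_groups=[{"sex": 1}], unprivileged_groups=[{"sex": 0}]
)
print(metric.disparate_impact())               # ratio of favorable rates; 1.0 means parity
print(metric.statistical_parity_difference())  # difference of favorable rates; 0 means parity
```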
C. Advantages of XAI
- Building user trust since the user knows the reason for every result
- Satisfying legal requirements (compliance), whether from the EU's GDPR or from US DARPA programs.
- As software engineering students, we cannot deny that XAI facilitates debugging and maintaining our AI models.
- Reducing the cost of mistakes, especially in safety-critical fields.
D. Challenges of XAI
- Inability to understand explanations.
- Algorithms and data are all too often not static.
- A “right to explanation”, such as the one discussed in the GDPR, may not be feasible.
E. How can we make machine learning models explainable?
One approach is to avoid highly opaque models, such as random forests or deep neural networks, in favour of more linear models. By simplifying the architecture you may end up with a less powerful model; however, the loss in accuracy may be negligible.
-> Sometimes, by reducing parameters, you end up with a model that is more robust and less prone to overfitting. You may also be able to train a complex model and use it to identify important features, or clever preprocessing steps you could take in order to keep your model linear (see the sketch below).
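A sketch of that idea under stated assumptions (scikit-learn, an arbitrary public dataset, and an arbitrary cutoff of four features):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# opaque model first, used only to rank the features
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[-4:]  # keep the 4 most important features

print("forest, all features :", cross_val_score(forest, X, y).mean())
print("logistic, top features:",
      cross_val_score(LogisticRegression(max_iter=5000), X[:, top], y).mean())
# if the two scores are close, the simpler, readable model may be the better choice
```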
F. XAI in the cloud
Increasingly, companies are using cloud-resident tools to build AI models. Companies also need to have trust and transparency in their cloud services and the AI models that are produced by these services.
In November 2019, Google announced Google Cloud AI Explanations. Explanations quantify each data factor's contribution to the output of a machine learning model. These summaries help enterprises understand why the model made the decisions it did, and organizations can use this information to improve their models further or share useful insights with the model's consumers.
-> As Frey (the Director of Strategy for Cloud AI at Google) noted in a blog post, "Of course, any explanation method has limitations. For one, AI Explanations reflect the patterns the model found in the data, but they don't reveal any fundamental relationships in your data sample, population, or application."
Google also introduced what it calls Model Cards, starting with cards for Face Detection and Object Detection within its Cloud Vision API offering. Model cards "are short documents accompanying trained machine learning models that provide practical information about models' performance and limitations." The goal of the cards is to help developers make better decisions about which models to use, for what purpose, and how to deploy them responsibly.
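Model cards are documents rather than code, but a minimal sketch shows their shape; the field names below are our own illustration, not Google's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    limitations: list[str] = field(default_factory=list)
    performance: dict[str, float] = field(default_factory=dict)  # ideally sliced by subgroup

card = ModelCard(
    name="face-detection-v1",  # hypothetical model
    intended_use="Detect face presence and position; not identity recognition.",
    limitations=["Accuracy degrades in low light", "Not evaluated on occluded faces"],
    performance={"precision": 0.94, "recall": 0.91},  # illustrative numbers
)
print(card)
```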
=> Business leaders and consumers need to be able to trust the outcomes of advanced artificial intelligence models. Ideally, a company wants to embed explainability into the initial design of a model. However, it's equally important to explain the outcomes of existing models to ensure fairness, check for consistency and enable enhancements in the retraining cycles.
5. Self-driving vehicles and ethical concerns
A. Advantages
- Greater Road Safety: Government data identifies driver behavior or error as a factor in 94 percent of crashes, and self-driving vehicles can help reduce driver error.
- Greater Independence: People with disabilities, like the blind, are capable of self-sufficiency, and highly automated vehicles can help them live the life they want.
- Saving Money: Automated driving systems could impact our pocketbooks in many ways. HAVs can help avoid the costs of crashes, including medical bills, lost work time and vehicle repair.
- More Productivity: In the future, HAVs could offer the convenience of dropping vehicle occupants at their destination, whether an airport or shopping mall, while the vehicle parks itself.
- Reduced Congestion: HAVs maintain a safe and consistent distance between vehicles, helping to reduce the number of stop-and-go waves that produce road congestion.
- Environmental Gains: HAVs have the potential to reduce fuel use and carbon emissions. Fewer traffic jams save fuel and reduce greenhouse gases from needless idling.
B. Ethical concerns
- What do you choose: the car that is programmed to save as many lives as possible, or the one that saves you regardless of the situation?
- Are you okay with the car analyzing and factoring in the passengers and the particulars of their lives?
- Could a random decision be better than a predetermined one? And who will be making these decisions anyway?
- Who will be held accountable for an accident?
C. The Moral Machine
Curious to see if the prospect of self-driving cars might raise other ethical conundrums, Iyad Rahwan gathered an international team of psychologists, anthropologists, and economists to create the Moral Machine.
This platform gathered almost 40 million moral decisions from millions of online participants across 233 countries and territories.
The Moral Machine team's work provides new insight into how morals change across cultures, which is highly relevant to AI and self-driving cars.
Three fundamental principles hold true across the world:
- Save humans (over animals)
- Save children (over the elderly)
- Save the greater number of humans
Exceptions
- Eastern countries tend to favor sparing older people over children.
- France and the French sub-cluster showed an unexpectedly strong preference for saving women over men.
- The higher the economic inequality in a country, the more willing people were to spare executives at the cost of homeless people.
![[moral_machine.png]]
Conclusion
Unlike humans, autonomous vehicles can act in an instant and evaluate all sensory information involved before a car accident. They can be programmed to act a certain way based on this sensory information, naturally producing the ethical dilemma: should cars be programmed to save certain lives based on a predetermined value?
The answer is no, as all people are created equal and doing so can ultimately result in a group of individuals being unfairly targeted, which is both unethical and illegal. While self-driving cars may initially seem like the future of driving, programming them ultimately compromises the ethical obligations that engineers have.
6. Digital literacy and data governance
A. Introduction
Data literacy: “The ability to read, write and communicate data in context, including an understanding of data sources and constructs, analytical methods and techniques applied — and the ability to describe the use case, application, and resulting value.”
Data governance: the processes and policies a government uses to gather, store, manage, and dispose of data. The connection is evident: governments that don't fully understand their data will fail to manage it throughout its life cycle.
Being data literate means we have to be willing to play detective every time we're presented with a data finding.
What kinds of questions should we ask when approaching a data finding?
- Where did the data come from?
- Who analyzed the data?
- What is missing from the data analysis?
B. Weighing the complexities of data governance
B. 1. Algorithmic bias
Our systems have a problem: bias and the risk of discrimination. Their algorithms are value-laden by nature, reflecting the life and background of the engineers who build them - typically white males from high-income countries.
Not only can algorithm design engender bias, but the transfer of services and the emergence of new digital products can also end up replicating or amplifying existing inequalities. For example, a new study of the most popular object-recognition algorithms found that 10 percent more errors were made when the algorithms were tasked to identify items from a household with a lower monthly income.
Moreover, these algorithms were 15 to 20 percent better at recognizing objects from the United Kingdom or the United States than those from Burkina Faso, Somalia, or Nepal.
B. 2. Data concentrations
The world’s largest companies rely on data to drive their business models. Alphabet, Facebook... They enjoy significant competitive advantages that come from owning massive data sets. However, this concentration of data within a limited number of corporations poses a challenge by limiting possibilities for the extraction of public value from data.
B. 3. Overcoming a lack of data literacy
Although awareness of the consequences of sharing data with public and private organisations is on the rise, possible secondary uses of data are not well understood by most.
For instance, justifications for collecting and selling data — otherwise known as privacy policies — tend to be excessively verbose and full of jargon, making them nearly impossible for the average internet user to understand.
B. 4. Avoiding the transparency fallacy
As AI becomes more sophisticated, it will become more difficult to explain in an understandable way. As the complexity of algorithms increases, rights to greater transparency might turn counterproductive if citizens lack sufficient data literacy to exercise those rights. The problem is not only faced by citizens; even programmers struggle to understand or explain the decisions taken by some neural networks.
“Relying on individual rights to explanation as a means for the user to take control of algorithmic systems risks creating a transparency fallacy. Individuals are not empowered to make use of the kind of algorithmic explanations they are likely to be offered; they are mostly too time-poor, resource-poor, and lacking in the necessary expertise to meaningfully make use of these individual rights.” — Edwards & Veale
B. 5. Assigning accountability
Some argue that machine learning algorithms must be considered moral agents with some degree of responsibility.
B. 6. Digital twins and the erosion of moral autonomy
Moral autonomy refers to one’s capacity to present one’s own identity to others and to resist attempts to stereotype one’s choices and biography.
In other words, when we can choose how we want to be and work towards that identity, we can resist external pressures that try to categorize us.
However, this effort to shape one’s own identity based on moral values becomes threatened when data collectors have already profiled us based on data points gathered about us — sometimes referred to as our “digital twin.”
C. The challenge of global data governance
The fact that attitudes toward privacy, data governance approaches, and technological development strategies differ widely across regions poses a challenge to the development of transnational data governance mechanisms.
- USA: market-centered
- China: innovation-centered
- Europe: human-centered
Case of Kenya's voter register
Kenyan law requires the voter register to be published prior to elections. The data - which included ID number, date of birth, gender, full name, and voting area - was published online.
Months after the election, reports indicated that political actors had obtained voter register data and used it to send targeted messages to voters, in some cases manipulating them. Reports also pointed to political parties having obtained the entire voter register.
Two data protection bills were passed by parliament to better ensure data protection and comprehension.
D. Toward better data governance
- Enforce accountability mechanisms for unethical data use
- Create favorable conditions for a private sector shift toward ethical technology development
- Address gaps in data literacy by developing and distributing educational programming for online and offline users
- Promote a diverse and interdisciplinary AI workforce
Conclusion of presentation
To conclude, only through experimentation and evidence-based policy can we move from reflection to action in our quest for a more equitable and inclusive digital future.
7. Fake News & Deep Fakes
Generally speaking, fake news is a false narrative that is published and promoted as if it were true.
Social media has now created an environment where anyone with an agenda can publish falsehoods as if they were truths. People can be paid to post fake news on behalf of someone else or automated programs, often called bots, can publish auto-generated fake news.
A. Types of Fake news
Claire Wardle reframes fake news as “information disorder”, a spectrum that ranges from falseness to intent to harm. It includes:
- Misinformation: Some spread false information without the intent to spread harm. People spreading misinformation believe it to be true before sharing it with others.
- Disinformation: People may spread information to cause harm or manipulate people. Disinformation describes actual lies that people tell for money, influence or to cause disorder.
- Malinformation: Information that may be true but is spread with malicious intent or taken out of context.
B. Dangers of Fake news
While some examples of fake news seem innocent, a lot of fake news can be damaging, malicious and even dangerous.
- Malinformation’s dangers are blatant - for example, publishing a person’s private address. But the potential dangers of misinformation and disinformation are more subtle.
- Fake news is created to change people’s beliefs, attitudes, or perceptions, so they will ultimately change their behavior.
- Misinformation and disinformation can also pose cyber security concerns. Fake news articles can be entry points for hackers attempting to steal your information.
C. How to spot Fake News
- Consider the Source: Think about the actual source of the news. What does the source stand for? What are their objectives?
- Supporting Sources: Look at the sources cited in the article. Are they themselves credible? Do they even exist?
- Multiple Sources
- Check the Author: Who is the author? Research them to see if they are credible.
- Check the Date: Ensure the publication date is recent and not just an older story rehashed.
- Comments: Even if the article, video, or post is legitimate, be careful of comments posted in response. Quite often links or comments posted in response can be auto-generated by bots or by people hired to put out bad, confusing, or false information.
- Check Your Biases: Be objective. Could your own biases influence your response to the article?
- Check the Funding: Even legitimate publications have sponsors and advertisers who can influence an article or source. Check if someone funded the article and if so, find out who paid for it.
D. Deep Fakes
D. 1. Introduction
=> Recent advancements in Artificial Intelligence (AI) have created a perfect storm for democratizing the creation of deep fakes and distributing them at scale via social platforms.
Face swap (using OpenCV): When applied correctly, this technique is uncannily good at swapping faces. But it has a major disadvantage: it only works on pre-existing pictures. It cannot, for instance, morph Donald Trump’s face to match the expression of Ted Cruz.
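A crude sketch of that classical approach with OpenCV (file names are placeholders, and it assumes a face is detected in both images; a real pipeline would align facial landmarks rather than resize a rectangle):

```python
import cv2
import numpy as np

src = cv2.imread("face_a.jpg")  # image containing the face to paste (placeholder path)
dst = cv2.imread("face_b.jpg")  # target image (placeholder path)

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
sx, sy, sw, sh = cascade.detectMultiScale(cv2.cvtColor(src, cv2.COLOR_BGR2GRAY))[0]
dx, dy, dw, dh = cascade.detectMultiScale(cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY))[0]

patch = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))  # crop source face, fit target box
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
center = (dx + dw // 2, dy + dh // 2)

# Poisson blending matches the pasted face to the target's lighting and skin tone
swapped = cv2.seamlessClone(patch, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)
```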
Deep fakes: morph a person’s face to mimic someone else’s features, while preserving the original facial expression.
Creating deep fakes:
- Extraction: the process of extracting all frames from video clips and images, identifying the faces, and aligning them to create a dataset.
- Training: the process that allows a neural network to convert one face into another (see the sketch after this list).
- Creating:
- Once training is complete, it is finally time to create a deep fake. Starting from a video, all frames are extracted and all faces are aligned. Then each face is converted using the trained neural network, and the final step is to merge the converted face back into the original frame.
- The merging step itself does not use machine learning; it is a conventional image-processing algorithm.
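A toy sketch of the shared-encoder / two-decoder idea behind the training step (PyTorch, with random tensors standing in for aligned face crops):

```python
import torch
import torch.nn as nn

# one shared encoder, one decoder per identity
encoder   = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, 3, 64, 64)  # placeholder for aligned face crops of person A
faces_b = torch.rand(32, 3, 64, 64)  # placeholder for aligned face crops of person B

for step in range(100):
    # each decoder learns to reconstruct its own person from the shared encoding
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a.flatten(1)) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

# the "conversion": encode a face of A, decode it with B's decoder
fake_b = decoder_b(encoder(faces_a[:1])).reshape(1, 3, 64, 64)
```

Because the encoder is shared, it learns pose and expression features common to both identities, and each decoder re-renders those features as its own person's face.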
D. 2. Benefits of deep fakes
- Although deep fakes could be used in positive ways - such as in art, expression, accessibility, and business - they have mainly been weaponized for malicious purposes.
D. 3. Harms of deep fakes
- Deep fakes can harm individuals, businesses, society, and democracy, and can accelerate the already declining trust in the media.
- Legal aspects: Using this technology to create non-consensual immoral content is not technically a crime yet, but it could fall into the category of revenge content. However, face-swapping children or underage teens into immoral scenes is indeed a crime.
- Sometimes, videos of people are used without their consent.