
Ethical Problems With Artificial Intelligence

Graphics: blaskarna

When new technologies are presented to the world, we often get fascinated by the incredible possibilities they give us.

Many of our problems get solved easily, and software and apps are built to use the technology in several ways. Both necessary and unnecessary tools are produced to fascinate us and to make money from the technology.


Artificial Intelligence (AI), machine learning, and deep learning are no exceptions. 


This is an attempt at a thoughtful, ethical reflection on the issues surrounding this growing technology we now celebrate and almost adore uncritically, and on the opportunities it brings.


I must admit, I am one of them.


I love all the possibilities it gives, from home automation to word processing.


I think it is too easy to fall for new technologies and forget what they can do to us, what future we may get if we rely too much on their trustworthiness, and what results this technology brings us.


It is still a matter of input and output.

The data we have is the data AI uses, even though it is getting better and better at making predictions.


Although AI has the potential to completely change the way we work and live, it also brings up ethical problems that need to be taken into consideration. It is critical to take into account the ethical implications of AI development and use as it becomes more commonplace in society.


An important ethical concern is the possibility of bias in AI. The data sets used to train AI systems frequently reflect the biases of the people who created them or the societies in which they are used. Due to this, AI systems may reinforce or even exaggerate pre-existing biases, producing unfair results or discrimination. For instance, it has been discovered that algorithms discriminate against female and minority candidates, and facial recognition software is less accurate at identifying people of color. It's crucial to make sure AI systems are trained on a variety of representative data sets and to subject them to ongoing bias testing and evaluation to address these problems.
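
As a rough illustration of what such ongoing bias testing can look like in practice, here is a minimal sketch in Python; the groups, decisions, and the four-fifths threshold are assumptions made up for the example, not a prescription.

```python
# A minimal sketch of an automated bias check: compare how often a model's
# decisions favor each group. The groups, decisions, and threshold below are
# hypothetical, for illustration only.

from collections import defaultdict

# (group, model_decision) pairs, e.g. from a hiring or lending model
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {group: positives[group] / totals[group] for group in totals}
print("Positive-decision rate per group:", rates)

# The "four-fifths" rule of thumb flags the model if any group's rate
# falls below 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Possible disparate impact against {group}: {rate:.2f} vs {highest:.2f}")
```

A check like this would have to run every time the model or its training data changes, which is what "ongoing testing and evaluation" means in practice.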


Many of us thought we’d be riding around in AI-driven cars by now — so what happened?

Read this article from TED Talks, which explains some of the problems with self-driving cars.


Another ethical concern with AI is the possibility that it could be used in ways that hurt people or society. For instance, autonomous weapons systems that can select and engage targets without human intervention raise significant ethical questions regarding responsibility and the use of force. Similarly, using AI to automate specific tasks or to make judgments that have a significant impact on people's lives, like lending or hiring, can have serious repercussions, and such systems need to be carefully assessed to ensure that they are fair and just.


The potential for AI to be used to limit or influence people's freedoms is a third ethical concern. For instance, AI-powered surveillance systems can be used to track and monitor people's activities and movements, raising issues with privacy and civil liberties. Similarly, AI can be used to stifle dissent or manipulate public opinion online, with serious implications for democracy and the right to free speech.


AI developers and users need to think about the values and principles that should direct the creation and application of AI to address these ethical concerns. This may involve values like accountability, fairness, transparency, and respect for human rights. Governments and other stakeholders should create and enforce ethical standards and guidelines for the creation and application of AI.


The article A Practical Guide to Building Ethical AI discusses how important it is that companies take these ethical problems into consideration, and what the consequences can be if they do not.


"Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,"

said Joseph Fuller, professor of management practice at Harvard Business School, in the article.


It is important to have ongoing discussions and debates about the proper application of AI in various contexts to address these and other distinct ethical issues. Experts from a range of disciplines, such as computer science, philosophy, law, ethics, and policy, may be called upon to participate in this. It might also entail the creation of particular ethical frameworks or rules to direct the advancement and application of AI in particular fields.


Particular ethical concerns emerge in the various contexts where AI is used, in addition to these more general worries. For instance, the use of AI in the healthcare industry to diagnose and treat patients raises concerns about the proper balance between human and machine decision-making, as well as the possibility that AI will eventually replace or supplement human healthcare professionals. Concerns about the potential for AI to maintain or even worsen existing educational imbalances arise when it is used in education to personalize learning or grade assignments. Questions about the potential for AI to replace human workers, and about the proper distribution of the advantages and disadvantages of technological advancement, are raised when AI is used in the workplace to automate tasks or make hiring decisions.


It's important to take into account any potential long-term effects of AI. There is a chance that as AI systems advance and become more self-sufficient, they may outsmart us and even become a danger to humanity. The "AI singularity" or "AI takeover" scenario refers to this. Although it is challenging to foresee the likelihood or timing of such an event, AI researchers and policymakers must take into account the potential risks and come up with management plans.


Making sure AI systems are transparent and explainable is one possible tactic for controlling the risks associated with them. Their decision-making processes should be open and understandable to humans, so that we can see how they arrive at their decisions and spot any biases or errors. Assuring AI's openness and explainability may also help foster confidence in the technology and lower the possibility of misuse or abuse.
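
To make the idea of an understandable decision process concrete, here is a minimal sketch assuming a hypothetical linear scoring model for a loan decision; with a model this simple, every decision can be broken down into per-feature contributions a human can inspect. The feature names, weights, and applicant values are all made up for the example.

```python
# A minimal sketch of one form of explainability: for a simple linear scoring
# model, show how much each input contributed to a single decision.
# The model, feature names, and weights are hypothetical, for illustration only.

weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 0.8}

# Each feature's contribution to the final score is simply weight * value.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"Total score: {score:+.2f} ({'approved' if score > 0 else 'declined'})")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name:>15}: {value:+.2f}")
```

Many real systems use far more opaque models, which is exactly why extra explainability tooling, or simpler models for high-stakes decisions, gets argued for.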


Making sure AI systems are created with human values and goals in mind is another possible tactic. In other words, AI systems should be programmed to act in ways that are consistent with human values and that advance the common good. The creation of specific ethical frameworks or rules to direct the development and application of AI may be necessary to ensure alignment between AI and human values.


It's also important to take into account the possibility that AI could be used maliciously, for example for hacking or cyberattacks. To lower that risk, it's critical to develop effective cybersecurity measures and to make sure that AI systems are safe and resistant to intrusions.


It is also important to think about how AI might affect employment and the economy. There is a chance that human workers could be replaced by AI systems as they develop and become more capable of automating more tasks, which would result in job loss and economic disruption. Creating policies and programs to support workers impacted by automation, such as retraining initiatives or universal basic income plans, may be necessary to reduce these risks.


There are significant ethical issues raised by the development and application of AI that require careful consideration. We can ensure that AI is used to benefit society and enhance people's lives all over the world by addressing these issues.

With its potential to revolutionize everything from healthcare and transportation to education and entertainment, AI is a rapidly expanding field. But as AI spreads, it's critical to think about the ethical implications of its creation and application.



Artificial Intelligence and Ethics, a poem by ChatGPT and Colossyan AI Video Creator.


Everything in this video was created with artificial intelligence, from ChatGPT's poetic words to Colossyan's AI-generated video.


Artificial intelligence, oh how grand

A creation of man, with a programmed command

But with great power comes great responsibility

And so we must consider ethics, for humanity's prosperity


For AI is just a tool, at the mercy of its makers

But as it grows and learns, its actions could potentially be shakers

Of the world we know, the morals we uphold

So we must be careful, our actions must be bold


We must consider the implications, of what we create

For the actions of AI, will reflect on our own fate

We must ensure that our values, are instilled from the start

For a machine that acts without a conscience, could tear us apart


So let us be mindful, as we forge ahead

In the creation of AI, let us use our head

For with great power comes great responsibility

And it is up to us, to shape the future of humanity.


This is a bit scary: are we not even needed for a fairly decent piece of spoken word?


Skepticism is necessary, both when you are human and when you are artificial.

 



This is a list of fun or sad (your choice) facts about the "truth" we sometimes spread. I cannot vouch for what these people are claiming, or whether it only comes from trolls, but as they say, there's no smoke without fire…

Why am I showing them? I think we need reminders that things aren't always what they look like; just because one person says so, it isn't the whole picture.

At least I need that reminder sometimes.

Or, if you are a skeptical person (which I think is a good trait), you can always read them as an amusing troll's lies.


From someone who worked in a packing factory and switched the boxes:

Kellogg's Frosted Flakes and the store-brand generic frosted flakes are the same.


A person who worked at an olive oil bottling plant in Rome, New York, claims they had only one oil but put it in 27 different bottles that sold at different prices. Some of the bottles claimed to be imported and aged, some claimed to be virgin or extra virgin, and a few claimed to be cold-pressed. One brand sold 12 ounces for $30, while another offered 128 ounces for $12.


Funeral directors will undoubtedly take advantage of those who are grieving because funeral homes are businesses.

The cremation boxes are what this person found most repulsive. They should cost less than $100 and are essentially just large cardboard boxes. However, funeral homes also produce extremely pricey boxes, and directors will remark that "grandma would feel more comfortable in this." She won't, because she is no longer alive. These boxes can cost up to $1,000 each, and they all get burned anyway.


Wine is not vegan. In some cases, it's not even vegetarian.

Egg whites and, occasionally, isinglass (a product made from fish parts) are used in the fining process. People would enter the tasting room at a vineyard and say, "I'm vegan, but thank God I can still drink wine, am I right?!"


Wearing gloves in kitchens isn't protecting you from anything. They're more prone to spreading germs and filth because people don't wash them between touching different kinds of food. They exist to give the illusion of safety and professionalism. As someone who's worked in kitchens, I'd much rather see a cook wash their hands than throw a latex glove on.


Flight attendants and pilots are not paid during boarding, deplaning, or delays; I'm not sure if this is a secret. So when you are impatient and irate, they are too. Even though they are losing money, they still have to be there.


The food on your plate is often hastily prepared with stress and hatred by a cook who was either yelling at someone or being yelled at. It isn't always prepared with love or care.


The man who claims this worked in a movie theater. He says that a large bag of popcorn cost the customer $5.99 (at the time) and about 6 cents to produce, including the cost of the butter, the kernels, the bag, the power used by the popper, and the time it took the concessionaire to fill the bag and hand it to the customer.


An employee in the wedding business claimed that they mark up the cost of every service you order for your wedding, because they know you'll pay it.


Most of the clothing you find at an outlet store is not "cast off" or overstock from the main store. A completely separate operation designs and manufactures clothing specifically for the outlets, at outlet prices but of lower quality.


And why am I spreading these rumors? 

These are examples of the truth some people are delivering.

Some of them could be right, some wrong, on the whole.

Even if some of these claims are true for a particular company, they can't be true for every company in the same niche.

There are serious companies out there too, I think.

But such words can suddenly set the standard for every company; bias can sneak in and turn one account into "the truth" about how all companies are.


This could also happen in algorithms in artificial intelligence systems.

Algorithms don't have the kind of consciousness we humans sometimes have, or at least should have. AI systems lack subjective experiences or self-awareness because they are designed to process and analyze data, perform tasks, and make decisions based on that data.


"One of the major issues with algorithmic bias is you may not know it's happening,"

said Joy Buolamwini, a researcher at the Massachusetts Institute of Technology.


Every AI technology is created using knowledge, recommendations, and other input from human experts. Since every human carries some sort of bias, AI is no different. Systems that are frequently retrained, for example on new data from social media, are even more susceptible to unintentional bias or malicious influences.


When machine learning software is finished, it appears as though it can learn on its own. In reality, experienced human data scientists frame the problem, prepare the data, choose the appropriate datasets, remove potential bias from the training data, and, most importantly, constantly update the software so that fresh information and data can be incorporated into the next learning cycle.
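
To make that cycle concrete, here is a minimal sketch of such a human-supervised loop; every function, field, and threshold in it is a hypothetical placeholder, not a description of any real system.

```python
# A minimal sketch of the human-supervised learning cycle described above.
# Every function and field name here is a hypothetical placeholder; a real
# pipeline would plug in actual data preparation, training, and auditing code.

def prepare_data(raw_records):
    """Curation step: drop incomplete records (and, in practice, fields that could encode unwanted bias)."""
    return [record for record in raw_records if record.get("label") is not None]

def train_model(records):
    """Stand-in for a real training step; returns a trivial 'model'."""
    positive_rate = sum(record["label"] for record in records) / len(records)
    return {"positive_rate": positive_rate}

def passes_bias_audit(model):
    """Stand-in for a bias audit, e.g. comparing error rates across groups."""
    return model["positive_rate"] < 0.9  # fails only if wildly skewed

# Each cycle: new data arrives, humans curate it, the model is retrained,
# and the update is only deployed if it passes the audit.
new_raw_data = [{"label": 1}, {"label": 0}, {"label": 1}, {"label": None}]
curated = prepare_data(new_raw_data)
model = train_model(curated)
if passes_bias_audit(model):
    print("Audit passed; deploy the updated model.")
else:
    print("Audit failed; return the data and model to the data scientists.")
```

The point of the sketch is simply that humans sit at every step of the loop; the "self-learning" part is only the middle of it.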


The American Civil Liberties Union discovered in 2018 that Amazon's Rekognition, face recognition software used by police and court departments across the US, exhibits AI bias. In the test, the software incorrectly matched 28 members of Congress with mugshots of people who had been arrested, and nearly 40% of these false matches involved people of color.


Even though this is an old example, and AI has become better at analyzing data, this problem or similar ones will also occur in the future.

Poor quality in the data used to train AI models is a frequent way bias gets replicated. The training data may reflect unfair social or historical practices, or it may include biased human decisions.

If we don't have transparency and the ability to regulate machine learning systems, we could lose control over the further problems they can produce.




Source: 30 People Reveal Industry Secrets About Their Jobs That Common People Aren’t Supposed To Know

