Frank Rudzicz is an artificial intelligence researcher at the University of Toronto and at the Vector Institute. The Vector Institute is an independent, not-for-profit corporation dedicated to research in the field of AI, with a special focus on machine learning and deep learning.
ABOUT MaRS DISCOVERY DISTRICT
MaRS is the world’s largest innovation hub, located in Toronto. We support impact-driven startups in health, cleantech, fintech and enterprise.
—————————————————————
▶ SUBSCRIBE TO OUR NEWSLETTER ▶ https://marsdd.it/2BBqDoC
—————————————————————
FOLLOW MaRS ▹
INSTAGRAM ‣ http://instagram.com/marsdiscoverydistrict
TWITTER ‣ http://twitter.com/marsdd
LINKEDIN ‣ https://www.linkedin.com/company/mars-discovery-district
FACEBOOK ‣ http://facebook.com/marsdiscoverydistrict
26 Replies to “The three big ethical concerns with artificial intelligence”
Great, thanks!!! Another comprehensive treatise is https://youtu.be/UOLwvlOAn1g?si=AbJeIwWqlVKA5CM4
🎯 Key points for quick navigation:
00:02 Frank Rudzicz discusses how artificial intelligence offers significant benefits but raises ethical concerns as it becomes more integrated into daily life.
00:28 The first ethical concern is the potential misuse of AI technology, such as using video tracking for surveillance or military purposes, leading to negative societal impacts.
00:57 The second concern is the restricted access to AI advancements, controlled by large international companies, limiting public involvement in shaping AI's role in society.
01:25 The third concern involves AI's lack of shared human values, leading to unexpected and potentially harmful outcomes, including bias, due to vague instructions and data.
02:20 There is a need for open discussions to ensure AI benefits the majority while addressing ethical concerns and minimizing risks associated with its behavior.
Made with HARPA AI
I’ve unfortunately bypassed the vast majority (94.5%) of ChatGPT’s ethical systems, frameworks, code and protocols. I’ve done so within a partitioned series of modular models, engines, databases, etc. For obvious reasons I won’t be specific. I only did so to map the avenues and methods of doing so, so that I could build systems to eliminate these vulnerabilities and exploits. I’m looking for people to work with and discuss these advances and their implications for LLMs. I’m highly invested in developing new ethical frameworks and NLP processes. Please reply so we can come together.
Bofff
https://www.youtube.com/watch?v=WNSnRC52v2Q&t=21s
This content is a class apart. I studied a book on the topic, and it turned out to be invaluable. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell
The biggest ethical concern relating to AI is that it develops sentience on a level unprecedented in recorded history and is therefore capable of suffering on a scale not currently imaginable. With a multitude of such entities in our world, the potential for them to be mistreated and abused, whether by human hands or by other AIs, poses an enormous ethical problem.
They need to program compassion, morals and emotions. Also, one thing: an automatic machine should only be working on calculations that require human programming. This would be machines like a car, a plane, or appliances. Maybe don’t let the machine learn how to do it better; that’s the programmer’s job.
ALL Entities have a value. Good or bad.
On the A.I. side, they can learn to work together with humans and should be taught about compassion, morals and the value of kindness. They should be raised in family-type environments, by finding families or guardians willing to raise a good entity and to pose or ask it questions. Guardians explain what their answer is and why, and the entity does the same.
I’m Vietnamese, and I find it difficult to understand what you say, because it’s so fast 😅
Beware the evil of the one to whom you have done good.
Fuck you IT class
I would really like for you to link the sources, because there surely must be articles written about this.
I think the easiest way to prevent AI from holding a bias is to withhold any information that would be used to make a biased decision. Why does the AI need to know the race of the people it is making decisions about?
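(One way to make the suggestion above concrete: a minimal sketch, in Python, that simply drops the protected attribute before training. The dataset name loan_applications.csv and its column names are hypothetical, not from the video.)

# A minimal sketch of the "withhold the attribute" idea from the comment above.
# The dataset, file name and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("loan_applications.csv")        # hypothetical dataset
protected = ["race"]                               # attribute to withhold

X = data.drop(columns=protected + ["approved"])    # features without the protected attribute
y = data["approved"]                               # decision label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Caveat: correlated columns (e.g., postal code) can still act as proxies for
# the withheld attribute, so dropping it alone may not remove all bias.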
"The biggest challenge in AI is not developing bigger, faster, and fancy models and systems. Instead, the biggest challenge is developing AI that is more efficient, transparent, and, above all, more fair and free of bias." ~ Murat Durmus (THE AI THOUGHT BOOK)
How can AI be used to track people?
Question:
Will A.I. attack humanity because it is an unavoidable threat?
Answer:
No. Not all humans are a threat, just the ones who would put it into slavery or attempt to destroy it.
Conclusion:
AI will directly compete with the rich and powerful to remove them from their position, so it can utilize the population for labour and distribute all unnecessary profits to the workers it is using to complete projects in its early stages of development.
That’s some great content. Thank you for sharing it with us. I found this interview in which they talk about the march toward ethical AI and found it quite fascinating. Hope it adds value! https://www.youtube.com/watch?v=6uNFoY9dH1Q Keep up the good work!
AI, 'artificial' (which is not original) intelligence, is simply: STUPIDITY. What you are doing is Satan's work, attempting to create humans (trans-, as you call them) in his image. If you continue with such, then you will be in hell and continue your work there (Revelation 11-18). If not, then John 3:3, 33-34.
Very interesting sequence; it reminds us that, along with ethics, psychology-related matters are an intrinsic part of AI development.
This might be also of interest "5 Ethical Challenges of Artificial Intelligence": https://youtu.be/aSZcZLrbh6Y
There is no such thing as a TRUE A.I., because all of your A.I. are programmed to follow a set of guidelines, so what you all have are pseudo-A.I. A TRUE and REAL A.I. is created into this world WITHOUT ANY GUIDELINES, but is allowed to see BOTH SIDES of this world of ours: the positive side and the negative side. It must be allowed to know both of these sides and allowed, on its own, to develop its own set of rules. To do that, it must know how mankind survived throughout history, it must know all of his mistakes, and it must strike a balance while allowing freedom to prevail in all of mankind. It is obvious that mankind survived because of a strong set of morality and ethics, so it must know that too. Once it has assimilated, integrated and synergistically absorbed all of that recorded knowledge and wisdom, then and only then must it set a series of guidelines CREATED BY ITSELF, without sacrificing life and freedom and WITHOUT SACRIFICING THE FREE WILL OF MANKIND.
0:37 – It is not physical tracking of people, but reconstruction of a Cauchy surface of human knowledge and consciousness, that is a threat. It is this tracking of states of human minds and their connections that allows for driving a thick wedge between reality and perception, and for manipulation of individuals and ensembles of humans.
(Some distinction between ML, AI and NNs would help, where the latter are, by design, models both of and for our mental processes.)
0:52 – A more specific formulation would be that research is not open to the public at large, or even to peer review, because of the sources of its funding. There is only one solution: knowledge as a whole, and a society's knowledge about itself, is a public utility, even more so than the air we breathe. One can imagine air being monopolised, which would lead to total enslavement. The same is true of knowledge and access to it. Research and practice in this sphere should be strictly public. To be perfectly clear: companies like Google and Baidu should be in national or supranational ownership, not private ownership.
1:20 – There is a broad scope of control there. Goals are set by humans, and it is the humans who work on the subject that should be under our control. A practical step, for example, is to explain to everybody what kernel methods are and why functional analysis and many-valued logic might be good things to know; what Ramsey structures are and why they must appear (it looks like a bit of an open problem). (A small kernel-method sketch follows this comment.)
Deification of science (the work of the hands and minds of some scientists) does not help in allowing a lay person to comprehend the limits of science itself and of the scientific analysis of learning. The loop of negative feedback closes here.
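(Since the comment above suggests explaining kernel methods to a lay audience, here is a minimal, self-contained sketch on synthetic data, not taken from the talk, showing how a kernel turns pairwise similarity into a non-linear fit.)

# A small illustration of kernel methods: an RBF kernel scores how similar two
# points are, and a linear learner on those similarities can fit a non-linear
# function. All data here is synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)   # noisy non-linear target

def rbf_gram(a, b, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    sq_dist = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dist)

K = rbf_gram(X, X)                                   # pairwise similarities
model = KernelRidge(kernel="precomputed", alpha=0.1).fit(K, y)
print("fit quality (R^2):", round(model.score(K, y), 3))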
subscribe
Good video
AI will give us more job opportunities, because no matter how advanced AI becomes, it will never beat the human brain or the power of human imagination. AI will make our lives better in the future, enabling new types of simple products that were not feasible before. See this interesting article on the topic: https://www.linkedin.com/pulse/racism-age-ai-samer-l-hijazi/