QFM008: Irresponsible AI Reading List – February 2024
Here is everything I found interesting about Irresponsible AI during February
2024.
From the disappointment of the Glasgow Willy Wonka experience, which serves
as a metaphor for the sometimes stark difference between expectations and
reality in the digital age, to the recall of GM's Cruise driverless cars after a
pedestrian accident, these articles collectively underscore the complexities and
unintended consequences of integrating advanced technologies into everyday
life.
A common theme across the list is the tension between technological
advancement and ethical responsibility. Whether it's Google pausing its Gemini
AI's ability to generate images due to diversity errors, concerns over privacy and
consent with facial recognition technology at the University of Waterloo, or the
unpredictability of AI as evidenced by ChatGPT's temporary lapse into
nonsensical responses, each story reflects the critical need for more reliable,
interpretable, and ethically grounded technological solutions.
Interestingly, amidst these cautionary tales, a counter-narrative emerges from a
professor dismissing the fear-mongering around AI as irresponsible, reminding
readers of the importance of balanced perspectives on the potential and pitfalls
of artificial intelligence.
Enjoy!
Key
: Mentions AI
: Talks about irresponsible AI
: Talks about irresponsible AI in a real-world failure scenario
: Talks about technical details of irresponsible AI
: Discusses technical details and mitigation of irresponsible AI
Source: Image by DALL-E 2
3. Glasgow Willy Wonka experience called a ‘farce’
as tickets refunded: The "Glasgow
Willy Wonka Experience", intended as an
immersive chocolate celebration, was cancelled
and refunds issued after attendees, including
children left in tears, encountered a lacklustre
setup in a sparsely decorated warehouse, far
from the promised magical environment. This
story went from niche to mainstream very quickly
and is perhaps emblematic of how far away from
reality some of the outputs of generative AI can
be. Here's how people on Threads, Xitter, and the
BBC saw it. And here's how The House of
Illuminati (creators) saw it. You be the judge.
#WillyWonkaFail #GlasgowEventDisaster
#RefundChaos
#ChocolateExperienceGoneWrong
#EventLetdown
4. Fake: It’s only a matter of time until
disinformation leads to calamity:
This article discusses the growing concern
over disinformation and fake news,
highlighting historical anecdotes of art
forgery to illustrate the ease and potential
dangers of spreading falsehoods. It
emphasises the importance of discernment
in an era where technology makes creating
and spreading fake information easier than
ever, warning of the serious consequences
disinformation could have on society and
democracy.
#FakeNews #Disinformation
#ArtForgery #DigitalEthics
#CriticalThinking
5. 'Facial recognition' error message on
vending machine sparks concern at
University of Waterloo: Smart
vending machines at the University of
Waterloo are to be removed after students
raised privacy concerns over an error
message suggesting the use of facial
recognition technology without their
consent.
#FacialRecognition
#PrivacyConcerns
#UniversityOfWaterloo
#SmartVendingMachines #TechEthics
6. Google pauses Gemini’s ability to generate
AI images of people after diversity errors:
Google has halted the ability of
its Gemini AI to create images of people
due to errors in accurately representing
historical figures and diversity, leading to
the generation of misleading
representations. The company is working
on improving the feature for re-release.
#GoogleGemini #AIaccuracy
#TechEthics #DiversityInAI
#DigitalHistory
7. ChatGPT goes berserk: Late on
Wednesday 21st of February, it seemed like
ChatGPT briefly lost its mind. According to OpenAI,
the incident involved a bug introduced during
an optimisation attempt that affected how
ChatGPT processes language, leading to
nonsensical responses. This issue was quickly
identified and resolved. Gary Marcus discussed
the incident, highlighting the unpredictable nature
of AI systems and the importance of developing
more interpretable, maintainable, and debuggable
technologies. He framed the incident as a wakeup
call for the need for trustworthy AI, emphasising
the challenges of ensuring AI safety and stability.
As always, the Hacker News comments on each
article are enlightening.
#AIStability #TrustworthyAI
#ChatGPTConcerns #AISafety
#TechWakeUpCall
8. GM's Cruise recalling 950 driverless cars
after pedestrian dragged in crash:
Cruise is recalling 950 driverless
cars nationwide after an incident where a
robotaxi failed to stop in time, hitting and
dragging a pedestrian in San Francisco. The
recall, sparked by concerns over the
collision detection system, marks a
significant setback for GM's Cruise, amidst
growing scrutiny over its autonomous
vehicle technology.
#CruiseRecall #DriverlessCars
#AutonomousVehicleSafety
#TechSetback
#InnovationChallenges
9. DPD's new AI customer service chatbot
fails at handling queries, instead writing
poems about the company's incompetence
and using swear words. Also
related: someone managed to find out the
System Prompt that they were using.
#AIFail #CustomerServiceFail
#ChatbotChaos #TechFlop
10. 'The fear-mongering around AI is
irresponsible', says professor:
The article discusses the viewpoint of a
professor who believes that the
widespread alarmism and fear-mongering
about artificial intelligence (AI) are
irresponsible. He argues that while AI
technology, such as generative AI, is
advancing, it is far from posing an
existential threat to humanity and
emphasises the importance of responsible
development and ethical use of AI.
#ArtificialIntelligence
#EthicalAI #TechnologyDebate
#AIResponsibility #FutureOfAI