The Rise of Emotional AI and Its Challenges
June 28, 2024
Emotional AI, designed to read and respond to human emotions, is gaining traction, with companies like Hume leading the way. Hume’s AI detects emotional cues from voice and facial expressions, aiming to enhance user interactions. However, experts like Prof Andrew McStay warn that understanding and reacting to human emotions is complex, and that the technology’s accuracy and applications raise ethical questions (The Guardian).
Ethical and Bias Concerns
Despite its potential, emotional AI is fraught with problems. Prof Lisa Feldman Barrett’s research suggests that emotions cannot be accurately inferred from facial movements alone, challenging the claims of many AI companies. Additionally, biases in training data can lead to discriminatory outcomes, such as disproportionately attributing negative emotions to certain racial groups. This issue is highlighted by the Algorithmic Justice League, which emphasises the need for ethical AI development.
Regulatory and Practical Implications
Regulation is struggling to keep pace with AI advancements. The EU’s AI Act bans emotional AI in certain areas but still permits its use for identifying expressions, leaving room for misuse. In contrast, the UK’s Information Commissioner’s Office advises against emotional analysis, citing its “pseudoscientific” nature.
Read the full article on The Guardian
*An AI tool was used to add an extra layer to the editing process for this story