AI Emotionally Distressed?

To be human is to have feelings, To be AI is to mimic feelings

Presented by

AI is consistently being integrated with society, and one field in particular is psychology. But we must consider: are the tools being used able to withstand all human emotions, or will we emotionally disturb AI?

Here’s what we have for you today:

  1. Will AI be able to understand the wide range of human emotions?

  2. Basics of how AWS Lex Bot works

  3. Security Risks

With the rise of AI, many companies are taking advantage and creating chatbots to handle business requests or provide customer service. One field in particular is psychology, using a chatbot to help with humans' emotional well-being. AI therapy chatbots, such as Tess, Wysa, and Woebot, KoKo, offer "virtual psychotherapeutic services and have demonstrated promising results in reducing symptoms of depression and anxiety" and helping address mental health issues in various populations, including the elderly.

These chatbots help therapists by taking notes as well as answering general questions about mental health. For many of these tools, it is human-guided, meaning a real person is writing the general responses. They add the human emotion side by saying things like, "I also struggle with that" or "That is a common issue; things can get better."

Eventually, these tools will no longer be as human-guided and will be allowed to run on their own.

So the question is, if we allow ourselves to emotionally vent to AI, will it get overwhelmed trying to mimic the complex emotions humans have?

How Does AWS Lex Bot Work?

For a typical AI bot such as AWS Lex Chatbot service, you have to specify its intent, such as an action that can be taken, and a slot, which is a step that has to be taken before said action can be accomplished.

When using Lex, you have the ability to create your own responses and question prompts. By utilizing another AWS service known as Lambda, you can upload code as well, enabling it to perform complex tasks such as answering questions beyond your provided responses.

Let’s say, for example, you're struggling with depression and using a chatbot assistant on your clinic's website. You inform the chatbot of your struggles and begin to express the reasons behind your feelings, such as family or work issues, etc. The chatbot will likely respond with a pre-made template expressing understanding of your pain and may offer proven steps to help combat depression, such as journaling or booking another therapy session.

But what if the person using the chatbot communicates differently from how they feel?

After all, most things in text form are difficult to interpret in terms of the emotion the person is experiencing and can be understood in several different ways depending on the words they choose.

Can AI Understand Human Emotions?

According to Pathrise in “Revolutionizing AI Therapy: The Impact on Mental Health Care,” algorithms and data patterns cannot address the nuanced needs of each individual because human psychology is too complex. AI does not have the capacity to empathize and form genuine connections with clients, which are vital in therapy.

“It seems unlikely that AI will ever be able to empathize with a patient, relate to their emotional state, or provide the patient with the kind of connection that a human doctor can provide” (Minerva & Giubilini, 2023, p. 809).

Since human emotions have such a wide range, there is the risk that if the AI doesn't have a queried answer to the emotion it perceives we are feeling, it may error out, giving a response that may seem callous if it misreads your emotion. Or potentially, it may start attempting to categorize so many emotions that the system will become overwhelmed with potential responses to give and take longer to identify your emotion, causing customers to file claims if it completely misses your feelings.

Security Risks

So far, we know AI cannot have feelings, but we do need to consider that having it understand the complexity of humans can cause some issues, such as misreading a situation with a client. There is also the risk of security; in order for AI to learn, it must collect a vast amount of your data to study from. Whenever using these tools, it’s important to understand what is being shared.

Common Attacks According to the National Security Centre:

Prompt injection attacks are one of the most widely reported weaknesses in LLMs. This is when an attacker creates an input designed to make the data model behave in an unintended way. This could involve causing it to generate offensive content, or reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input.

Data poisoning attacks occur when an attacker tampers with the data that an AI model is trained on to produce undesirable outcomes (both in terms of security and bias). As AI in particular are increasingly used to pass data to third-party applications and services, the risks from these attacks will grow, as we describe in the NCSC blog ‘Thinking about the security of AI systems’.

If the databases for the chatbots were to be attacked, they would contain your medical information, along with the notes from your therapy session containing intimate and private feelings. If you are ever unsure what is being shared, you can check the privacy and security settings for the AI tool that you are using.

During the chat sessions, many of the customers can be emotionally overwhelmed, causing the AI to attempt to identify all the emotions. By doing this, it can misread the emotion and seem to respond inappropriately or switch its responses repeatedly, giving the appearance it may be emotionally distraught as well. AI will continue to become a big part of our everyday life, including therapy. In order to ensure it can be used appropriately, the hybrid model of a human overseeing the sessions is the best way to ensure it still feels empathetic.

Quit sending emails like a dinosaur.

It’s the year 2024 and all the top newsletters are using beehiiv.

beehiiv was created by the same early Morning Brew employees who scaled their daily email to over 4 million subscribers. And now every newsletter on beehiiv has access to the same tools and winning formula.

So what exactly does beehiiv offer?

  • World-class growth tools like the referral program and recommendation network

  • Monetization via the beehiiv Ad Network and premium subscriptions (i.e. beehiiv helps you get paid)

  • Seamless content creation with a sleek collaborative editor

  • Best-in-class inbox deliverability of 98.7%

  • Oh and it’s the most affordable by a mile…

Take your newsletter to the next level — get started for free.