Why every tech company needs to integrate empathy

Humans have been building solutions that make us better, faster and stronger since we discovered that we could fashion tools out of rocks and twigs. Our earliest technological innovations focused on amplifying our physical skills, but once Alan Turing created an early example of the modern computer with his code-breaking machine, our attention increasingly turned towards processing information as quickly as possible to mimic human cognition. Today, everything from our watches to our thermostats is ready and able to process information far faster than our brains can.
Yet, with every new technological update touting faster speeds, more storage and exciting new features, consumers have grown to want and expect even more. Devices are expected to grow and learn to fit into the fiber of who their owner is, creating a partnership between technology and human beings that is as fruitful as possible. In order to do this well, we need to take things back to high school biology.
When we think of evolution, we think about our physical development from apes to Homo sapiens, adapting to our changing world. What we often forget is that one of the most critical parts of human evolution was the development of empathy. The ability to put oneself in another's shoes and to understand them on an emotional level has allowed us to form stronger connections that have bolstered our success as a species.
Humans evolved to have empathy to benefit our survival — and so too can technology through artificial intelligence. Empathetic functionality is the next phase in the evolution of technology. Here are the three main AI processes that are fueling this evolution.

1. Natural language processing

At its core, empathy is about actively listening in order to understand the subtleties of the speaker, an action that can be mimicked with natural language processing (NLP). NLP is already widely used to benefit our lives in a number of ways. For example, the National Center for Tumor Diseases uses NLP technology to better sift through free text to retrieve data, and everyone's favorite digital personal assistant, Siri, has brought NLP to the mainstream.
But as anyone who's ever ended up with a list of Mexican restaurants after asking Siri to exclude Mexican restaurants can attest, she's not the best listener. Voice recognition has yet to reach the level of understanding it needs. Many products on the market cannot understand differing accents, requiring users to change their natural speech patterns for the technology to understand them, which isn't exactly conducive to fostering empathetic interactions.
The next generation of NLP and voice recognition technology will need to fix this problem, creating software that better understands the implicit as well as the explicit parts of human communication.
Luckily, we’re already well on our way to this goal. Companies like SoundHound and MetaMind are building on first-generation audio assistants like Siri and making a serious investment in more empathetic NLP, identifying complexities in speech and creating technologies that recognize conversational nuances like exclusions and negations. These innovations will be used more broadly in the future, offering consumers more pertinent, thoughtful responses that truly address the problem they're looking to solve.
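To make the negation problem concrete, here is a deliberately tiny Python sketch of negation-aware query parsing. The word list, function name and example query are invented for illustration; real systems like the ones above rely on full dependency parsing and trained language models, not keyword rules.

    # Toy negation-aware query parser (illustrative only).
    NEGATORS = {"no", "not", "except", "excluding", "without"}

    def parse_query(text):
        """Split a query into wanted and excluded terms: a term that
        immediately follows a negation cue is treated as an exclusion."""
        wanted, excluded = [], []
        negating = False
        for word in text.lower().replace(",", " ").split():
            if word in NEGATORS:
                negating = True
            elif negating:
                excluded.append(word)
                negating = False
            else:
                wanted.append(word)
        return wanted, excluded

    print(parse_query("restaurants near me, not Mexican"))
    # -> (['restaurants', 'near', 'me'], ['mexican'])

Even this crude rule captures the interaction that trips Siri up: "Mexican" lands in the excluded list instead of being treated as just another search term.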

2. Computer vision

Body language can tell you more about how a person is feeling than the words that come out of their mouth. This is where computer vision comes in: a technology capable of interpreting our silent language of movements and expressions. Computer vision is already relatively popular. Facebook's Moments app uses powerful facial recognition technology to identify users from their faces and other body parts even when they're looking away, and Xbox Kinect's motion-sensing abilities allow users to seamlessly interact with their games using gestures.
Computer vision can be used even more strategically to build more empathetic functionality into our devices, allowing them to interpret the more implicit, physical parts of human interaction. What if your smartphone's selfie camera used facial recognition technology to analyze your expressions, allowing the device to identify feelings from happiness to anger? Or if the camera on your smart TV could detect when you've fallen asleep on the couch, using that discovery to pause your show and turn itself off? Incorporating this powerful machine perception technology into our devices will make them more intuitive, allowing them to recommend actions or provide information based on how the user is really feeling, not just what they're saying.
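As a rough illustration of the expression-reading idea, here is a minimal Python sketch using OpenCV's stock Haar cascades. It only detects faces and smiles; genuine emotion recognition would need a trained deep model, and the webcam snippet at the end is an assumption for demonstration.

    # Minimal face-and-smile detector using OpenCV's bundled cascades.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smile_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    def looks_happy(frame):
        """Return True if any detected face contains a smile."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face = gray[y:y + h, x:x + w]
            # High minNeighbors because the smile cascade is noisy.
            if len(smile_cascade.detectMultiScale(face, 1.7, 20)) > 0:
                return True
        return False

    # Sample a single frame from the default webcam.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        print("happy" if looks_happy(frame) else "not obviously happy")

A smart TV running a loop like this could just as easily notice that no face has been visible for a few minutes and pause your show.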

3. Machine learning

One of the key characteristics of having empathy is the ability to respond in a way that acknowledges another person's desires, rather than solely your own. To do this, you have to take your past experiences with a person into account and predict the best course of action. In technology, this is accomplished with machine learning. Airbnb is a good example of a company putting this to use, prioritizing machine learning as one of the key components of its search engine. It predicts the behaviors of renters to deliver more helpful search results, creating a better overall user experience.
Now imagine taking that functionality and applying it to your devices. What if your smartphone used weather data to determine that it would be cold and rainy and predicted that you'd be interested in something warm for lunch, recommending you order hot soup via Seamless?
What if it noticed that you typically call your grandma on Sundays, and started sending you weekly reminders to reach out? When incorporated into our daily devices, machine learning will help our products retain more relevant information about us and, in turn, respond to and predict our needs more accurately.
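To make the prediction idea concrete, here is a toy Python sketch in the spirit of the cold-and-rainy soup example, using a small decision tree from scikit-learn. The features, data and soup/salad labels are all invented; a real recommender would learn from actual order history and far richer context.

    # Toy context-aware lunch recommender (illustrative data only).
    from sklearn.tree import DecisionTreeClassifier

    # Features: [temperature in Celsius, is_raining]; labels: past orders.
    X = [[28, 0], [30, 0], [8, 1], [5, 1], [22, 0], [3, 0], [10, 1]]
    y = ["salad", "salad", "soup", "soup", "salad", "soup", "soup"]

    model = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # Today's forecast: 6 degrees and raining.
    print(model.predict([[6, 1]])[0])  # likely "soup"

The point is not the model but the loop around it: the device quietly accumulates context, predicts a need, and surfaces a suggestion before you ask.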
I'll admit that AI has gotten a bit of a bad rap in the past. We've experienced a healthy dose of fear about it in our lives — worrying ever since Mary Shelley's Frankenstein that our creations will take over and turn on us.
With technological innovations outpacing legal policymaking, there are real ethical and privacy concerns to be taken into account when dealing with AI, particularly when it's harnessed by big businesses or government agencies. It's understandable that we fear our devices may one day not need us to survive, but creating technology that understands us doesn't mean we're all going to end up in The Matrix or even like Joaquin Phoenix in Her.
The somewhat scary truth is that most of us have already taken the plunge with our devices, relying on them for almost all our most basic needs. Many argue that we spend too much time plugged in, but I'm willing to bet that one of the reasons we do so is because most of the heavy lifting is still on the end user.
AI technologies are the secret to creating products that work with us proactively, not just for us, in a reactive manner. If implemented correctly, these technologies will work in the background to deliver the services that truly make our lives better, allowing us to spend less time on our devices and more time together. When you think of it that way, AI becomes a much easier pill to swallow.
