Hey Siri, Can I help YOU?

If humans can understand each other, machines should be able to as well.

Nuances, expressions, context, jargon, imprecision, social and cultural depth.

Human language is diverse and deeply personal, and it can lead to a wide spectrum of interpretations. Are machines capable of understanding what we mean?

And even more… Are they able to communicate back in a way that we can understand? If so, we will have achieved one of the first objectives of Artificial Intelligence.

Human language is a challenging area for AI.

What’s the pattern to follow?




Conversational interaction

Conversational/human interaction, the topic of today, is one of the seven patterns of AI, alongside hyperpersonalization, autonomous systems, predictive analytics and decision support, patterns and anomalies, recognition systems, and goal-driven systems.

What’s the main goal?

Allow machines to interact with humans through human language patterns, and to communicate back to humans in a way they can understand.

No more clicking, typing, swiping, or programming … We are too lazy.

Now we want machines to interact with us in the same way that we communicate with each other. This includes voice, writing, or whatever method our wired brain is capable of understanding.

There are three scenarios: Machine to human, machine to machine, and human to machine interactions.

Examples include voice assistants, intent analysis, content generation, mood analysis, sentiment analysis and chatbots, with solutions deployed across sectors such as finance and telemedicine.

Mood, intent, sentiment, visual gestures… These concepts are already understandable to machines.


Natural Language Processing

There’s a difference between ASR (Automatic Speech Recognition), STT (Speech to Text) and NLP (Natural Language Processing). While the first two, ASR & STT, convert sound waves into words, the third one, NLP, interprets the data it hears. That does not mean AI (and Deep Learning) is unimportant in the ASR & STT fields: it has helped make speech-to-text more precise and text-to-speech more human.

However, Natural Language Processing (NLP) goes further than converting waves into words. We need both to understand and to be understood.

It’s here where we enter a huge world that ranges from the generation of speech and text, to the extraction and understanding of entities, the detection and identification of topics and themes, and the connection of sentences, concepts, intentions and meanings.

Basically, UNDERSTAND.

Natural Language Processing is divided into TWO PARTS: natural language understanding and natural language generation.

Let’s see.


Natural Language Understanding (NLU)

NLU focuses on interpreting human input such as voice or text and acting on it according to its intent. The MAIN goal is to understand the intent. How does it work?

AI uses different tools such as lexical analysis to understand sentences and their grammatical rules, later dividing them into structural components.

We use techniques such as tokenization, lemmatization (or its cruder cousin, stemming) and parsing to carry out these functions.

  • Tokenization: breaking up a string of characters into semantically meaningful parts that can be analyzed, discarding white space and comments.
  • Lemmatization (and stemming): identifying and returning the root form of each word so that additional information can be explored.
  • Parsing: determining the syntactic structure of a text. There are two different approaches: dependency parsing, which treats words as nodes and displays links to their dependents; and constituency parsing, which displays trees of the syntactic structure of a sentence using a context-free grammar.
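The first two techniques above can be sketched in a few lines. This is a minimal, illustrative example: the regex tokenizer and the suffix-stripping stemmer below are toy implementations, not what production systems (which use lemmatizers backed by vocabularies and trained models) actually do.

```python
import re

def tokenize(text):
    # Tokenization: break a string into meaningful parts
    # (words, numbers, punctuation), discarding white space.
    return re.findall(r"[A-Za-z]+|\d+|[^\w\s]", text)

def stem(word):
    # Naive suffix-stripping stemmer, for illustration only.
    # Real lemmatizers map "machines" -> "machine" via a vocabulary.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = tokenize("Machines are understanding us!")
stems = [stem(t.lower()) for t in tokens]
print(tokens)  # ['Machines', 'are', 'understanding', 'us', '!']
print(stems)   # ['machin', 'are', 'understand', 'us', '!']
```

Note how crude stemming mangles "machines" into "machin"; that loss of precision is exactly why lemmatization, which returns true dictionary roots, is preferred when a vocabulary is available.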


Natural Language Generation (NLG)

We first understand, then we answer.

NLG prepares and carries out effective communication with humans in such a way that the speaker does not seem to be a machine.

Broadly speaking, it’s a software process that transforms structured data into natural language, producing a written or spoken narrative from a dataset.

To do this, as we said before, the machine first needs to interpret the content and understand its meaning and then respond and carry out effective communication. At a high level, we carry out the following steps:

  • Step 1: Content Determination, deciding which information should be included in the text.
  • Step 2: Text Structuring, setting a reasonable order for organizing the text.
  • Step 3: Sentence Aggregation, combining related pieces of information in the same sentence.
  • Step 4: Lexicalization, choosing the words and phrases that express the content in natural language.
  • Step 5: Referring Expression Generation, similar to step 4, but selecting the expressions that identify the objects of the domain.
  • Step 6: Linguistic Realization, building well-formed, complete sentences.
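The pipeline above can be sketched with a toy data-to-text example. This is a hedged, minimal illustration: real NLG systems carry out each step with learned models, and the record fields and templates below are invented for the example.

```python
def aggregate(facts):
    # Step 3, sentence aggregation: join related facts into one clause.
    return " and ".join(facts)

def realize(city, facts):
    # Step 6, linguistic realization: build a well-formed sentence.
    return f"In {city}, it is {aggregate(facts)} today."

record = {"city": "Madrid", "sky": "sunny", "temp_c": 24}

# Step 1, content determination: pick which fields to verbalize.
facts = [record["sky"], f"{record['temp_c']} °C"]

print(realize(record["city"], facts))
# → In Madrid, it is sunny and 24 °C today.
```

Even this toy version shows the core idea: structured data goes in, a human-readable narrative comes out.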

Basic Formula → NLP = NLU + NLG
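The formula can be made concrete with a toy chatbot that chains the two halves together. The intent keywords and canned replies below are invented for illustration; real assistants use trained intent classifiers and far richer generation.

```python
# NLU half: map the user's words to an intent via keyword matching.
INTENTS = {
    "greeting": {"hello", "hi", "hey"},
    "weather": {"weather", "rain", "sunny"},
}

def understand(utterance):
    words = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return "unknown"

# NLG half: produce a human-readable reply for the detected intent.
REPLIES = {
    "greeting": "Hello! How can I help you?",
    "weather": "Let me check the forecast for you.",
    "unknown": "Sorry, I did not understand that.",
}

def generate(intent):
    return REPLIES[intent]

# NLP = NLU + NLG: understand, then answer.
print(generate(understand("Hey there")))  # Hello! How can I help you?
```

Swapping the keyword matcher for a trained classifier and the canned replies for a generation model gives you, in miniature, the architecture of a modern assistant.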

So, what’s next?

No more static content that generates nothing but frustration and wasted time for its users → Humans want to interact with machines that are efficient and effective.

Gone are the days of ELIZA, the first chatbot, developed in 1966, which showed us the opportunities this field could offer. Yet current assistants such as Alexa, Google Assistant, Apple Siri and Microsoft Cortana must still improve when it comes to understanding humans and responding effectively, intelligently and consistently.

They have no choice.

Let’s leverage the conversational pattern of AI.
