What is natural language processing? A beginner’s guide to AI.
What is AI exactly, and how do we use it every day?
Paul Kelly (D/A) and Dr James Piecowye (Zayed University) are joined by Diego Moreno, a senior data scientist at D/A, to discuss what AI means, how we use it every day, and how natural language processing works, in simple terms.
You might be surprised to see how it works in everyday applications, and to learn the implications for the Arabic-speaking world. There’s also a discussion about how AI is sometimes misused to confuse us, and how, when you make it simple, we understand its power a lot better.
Listen to the full recording below or on your favorite podcast platform.
Transcript:
Dr James Piecowye: Hello, my name is James Piecowye.
Paul Kelly: I am Paul Kelly
James: And this is ‘Know your audience’. Okay, Paul, we’ve covered a lot of ground so far. And I’m a little confused. No, I’m not confused. I have more questions. We’ve been talking about why audience sentiment, and collecting the data to make the necessary observations from it, is so important for giant companies right down to the small mom-and-pop operation happening at home. But how does it do it? We’re now delving into the world of programming, the world of computers, the world of math, and the world of AI, all of which… That’s it! As soon as those points converge, it’s totally out of my ability to assess whether or not I can trust a tool that uses natural language processing, which involves, you know, huge amounts of data, making decisions, and computers helping to do that.
Now, I don’t understand it. How do I really know what’s doing it? And how does it work? That’s where I’m at.
Paul: Yeah, well, it’s fair enough to be there. What we’ve been trying to do over the last few episodes is have our listeners think about what it means to them in their daily life, and try to demystify, I guess, the buzzwords and charlatan talk around artificial intelligence. It’s sometimes a tad overblown, what it actually is, which is effectively a tool to process a lot of information quickly and help humans make decisions, or have a computer make a decision that has the best probable outcome for whatever that thing is, whether it’s medical science, through to cars on a road, to your toaster.
There’s a lot of ability in there to utilize it. But platforms themselves, on social media for instance, will always say: I’ve got the most watches, the most listens, the most impressions. It doesn’t mean a thing, you know, it just sounds impressive. It’s so you buy more advertising from them. It’s time to demystify this stuff so more people use it, and we, together, can grow. Not even just an industry, but people using this in their everyday life, understanding it, and not being scared of technology. Computers didn’t take away the jobs, they changed the jobs. They changed efficiencies and things like that. Everybody needs to think through that lens, and that’s why today we’ve brought in our head data scientist, Diego, to start demystifying one of these parts of AI, natural language processing, so that we can explain how some of these things work as simply as possible.
James: So we’re at a point now where we’re gonna give you some context. This is really interesting; this is where we want to be moving forward from.
Paul: The understanding of the general public, I think, is a barrier to adoption of technology in any field. Forget about AI. It’s anything, right? Like moving to electric vehicles, for example. There are mental barriers, and some of those mental barriers are created around us to premiumize certain offerings. That’s why it’s important to really demystify this stuff and just open the hood, as you would say, James.
James: Well, let’s do that. Let’s bring Diego in. The first thing we want to demystify: as soon as we mention the words AI and natural language processing, which we’re now going to apply to doing research on the socials being used by people who might be your clients.
As soon as we put those three things together, AI, natural language processing, and socials, I think people’s minds just go: this is beyond my understanding, you’ll know it, I’m just going to trust you.
Paul’s telling us we can’t trust anyone.
Paul: So true.
James: So what are we talking about with NLP? What is this?
Diego: It’s the result of the collision of two universes. Yeah. Computer science and linguistics.
So the result of this collision is natural language processing. Just an example: the applications of natural language processing, as Paul mentioned, include sentiment classification. But the first application of natural language processing was translation, computer translation, the example we can find in Google Translate. Yeah, that was the first application of this field.
James: And I’m gonna put you on the spot. When we start thinking of computers doing natural language translation, how old is that? I mean, that’s not so long ago.
Diego: Yeah, it’s like 15 years since Google started working on translation. The first results in natural language processing were 15 years ago. So it’s really new. And the other applications, like sentiment, dialect classification and text prediction, those natural language solutions are really, really new.
Just a quick example: a couple of years ago, sentiment analysis was based on searching for the positive words and assigning a positive score, and the negative words a negative score, then summing up, and the result was positive or negative.
James: So, just to check: if I was looking at an Instagram post, the computer would find the positive words, give them a score, find the negative words, give them a score, add the two together, and then say, okay, this is a positive post. Okay. All right.
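A rough sketch of that older, lexicon-based approach in Python might look like this. The word lists and the example sentences are invented for illustration; real sentiment lexicons are far larger and assign graded scores.

```python
# Toy lexicon-based sentiment: count positive and negative words and sum.
# The word sets below are made up for illustration only.
POSITIVE = {"love", "great", "delicious", "amazing"}
NEGATIVE = {"hate", "terrible", "slow", "awful"}

def lexicon_sentiment(text: str) -> str:
    words = text.lower().split()
    # +1 for each positive word, -1 for each negative word, then sum.
    score = sum(1 for w in words if w in POSITIVE) - sum(1 for w in words if w in NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("I love this place, the food was great"))  # positive
print(lexicon_sentiment("terrible service and awful coffee"))      # negative
```

As Diego goes on to explain, this approach misses context entirely, which is exactly why it has been superseded.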
Diego: Well, that was a couple of years ago. Now, with the advances in computer science and artificial intelligence, there are more tools, there are more interesting analyses. So now we can analyze in a different way, because the most important thing in this field is the context. Yep. So now Google, especially with Google BERT, the solution for natural language analysis and natural language processing that it uses in the search engine, analyzes the context of the sentence, the context of the whole text.
James: So, how all the words are being used. And this is something we’ve spoken about: given different dialects, given different languages, you might use words in a negative way but mean something positive. That would have been missed a couple of years ago.
Diego: Yeah. Correct.
James: And now, using this BERT method, this BERT tool. It’s not really a tool, it’s a programming function.
Diego: Google BERT is not only for sentiment or dialect, yeah, you can use it in many fields. Text classification, text prediction. What is the next word, for example, when you’re using your mobile and you’re typing a message? What is the next word it recommends? Yeah, those things are based on natural language processing.
James: So that’s the actual application we’re seeing every day: when I’m typing my message and I get the finished sentence and I get to choose, that’s just natural language processing in practice.
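The next-word suggestion Diego describes can be sketched, in a deliberately simplified way, as a bigram model: count which word most often follows each word in a training corpus, then suggest the most frequent follower. Real keyboards use much richer models (including BERT-style ones); the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

# A made-up training corpus; real systems train on billions of words.
corpus = "i want to book a table . i want to read a book . i want to eat".split()

# For each word, count how often every other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Suggest the most frequent follower of `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("want"))  # to
```

A bigram only looks one word back; the point of models like BERT, discussed below, is to use far more context in both directions.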
Diego: Yeah, correct!
James: An avenue of it. So what we’re talking about here is the linkage of linguistics and computer science in natural language processing, for search and analysis purposes, to try to understand and figure out what the actual sentiment of these posts is.
Diego: Yeah. That’s, that’s right.
James: In general terms, okay, yeah, I get the idea that it’s going through things. But how does it know?
Diego: Yeah, so the first thing we need to consider is that we need to start with a data set. Take the whole flow for sentiment, for example: you need a corpus, a body of text, yeah, labeled with the sentiment. Then we need to train on this data, yeah, on that data set. So this is the first thing. This is the first point.
James: And training involves a lot of input.
Diego: Yeah, correct. So, a couple of years ago, the maximum number of rows we could train on was just 1 million. Now Google BERT is trained on 5,000 million rows, yeah, for a model. So the accuracy, the capacity, covers 95% of the speech, yeah, and the conversation. So it’s covered now, because it’s 5,000 million rows in just one language. That is an example for Arabic; for English it’s like three billion rows of text analyzed.
James: So, Diego, obviously this has a significant application to the Arabic language, which is very broad. But before we go in that direction (we’re going to do that in a future episode, when we bring our Arabic language specialists in as well), I really do want to back up and have you share your insights on how this whole process fundamentally works. The purpose here is so that the person who’s got to make the decision, do I go and invest in a service that’s going to use this AI to help us gain a better understanding of audience sentiment, can ask the right questions of the people who are trying to sell them a product.
Diego: Yeah. So in this field, as in analytics generally, for example for a prediction, you need the historical knowledge, the historical background. Yeah. So, if you have to predict the weather today, whether it will rain or not, you need to see what happened yesterday, or this morning, or two days ago. In this case it’s similar, but we use text, yeah.
James: Let’s contextualize this, take it down to: if I’m interested in why someone buys butter, I need to have the historical evidence of why they bought butter in the past. Is that what you’re suggesting?
Diego: Yeah, correct. It’s not necessarily exactly butter, but yeah, similar products. And not necessarily buy; it could be use. Yeah.
James: So you’ve got data on buying, you’ve got another set that can be on using, another that could be on shopping, and so on. And it’s not just one specific product, but categories of products.
Diego: Yes correct.
James: So suddenly, as we’re thinking about this, this is exactly what we’ve been talking about: enormous amounts of data, not only out there but being fed into a system. Which opens up exactly where we’re at now: okay, so how does the system then make sense of all this? Because they’re not all the same things.
Diego: Yeah, correct. So, the first thing is, okay, we have the data, yeah, we have the knowledge, we have the background. The next step is to predict or classify. Yeah. So, sentiment: you give me a text, or we can have a tweet, and the next step is to classify the sentiment, positive or negative, or in another case to translate that sentence. That’s the next step. How it works: we train the models using the knowledge data set, yeah, to try to predict. So, for example, with a recipe, you can estimate or predict what the recipe is. Okay, this guy mentioned butter, mentioned flour, mentioned x. So the prediction is: a cake, or cookies. Yeah, right? Because the recipes for cookies and cakes use those ingredients. That’s the way it works. If we have the knowledge, the base knowledge, we can predict, estimate and classify. That’s the way it works.
This is the high level, yeah. Now, here the most important thing, as mentioned before, is the context. Because in the same recipe example, if you’re analyzing a supermarket review about x and butter, it could be a review of the price, not a recipe. The context is the most important thing in this field, especially in natural language.
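The recipe example above can be sketched as a naive keyword-matching classifier: score each known category by how many of its keywords appear in the text, and pick the best match. The category names and ingredient sets are invented for illustration, and, as Diego points out, this approach ignores context entirely.

```python
# Invented keyword sets per category; a real classifier would be trained.
RECIPES = {
    "cake":     {"butter", "flour", "sugar", "eggs"},
    "cookies":  {"butter", "flour", "sugar", "chocolate"},
    "omelette": {"eggs", "butter", "cheese"},
}

def classify(text: str) -> str:
    """Pick the category sharing the most keywords with the text."""
    words = set(text.lower().split())
    return max(RECIPES, key=lambda name: len(RECIPES[name] & words))

print(classify("mix the flour sugar and chocolate with butter"))  # cookies
```

Note that a supermarket price review mentioning "butter" and "flour" would still be classified as a recipe here, which is precisely the context problem the newer models address.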
So, a couple of years ago, especially in the Arabic language, the first analysis, the first research on natural language processing, sentiment, text classification, was related to the Quran, just the religious texts.
So: classify whether this is good for the community or not, whether this is good for your life or not. Yeah, what is the recommendation based on the Quran.
Now, in the real world, we have a ton of reviews. So the most important thing, as mentioned, is the context. Here is one small example, yeah. Can you imagine two tweets, or two reviews? The first one is ‘I want to book a table for two’, and the second one is ‘I couldn’t find the results in that book’. In this case the keyword ‘book’ could be a noun or could be a verb.
James: But two very different sentences.
Diego: Yeah, correct. And this is an example in English, which is easy compared with other languages.
So this is one example of the challenge. Here, the most important thing is to analyze the context.
The first one, ‘I want to book a table for two’, is related to a review, say a restaurant review, and the second one could be a review of a bookstore, or a comment from a student in a library. So many options; the most important thing is the context. Hence the new tools, the new frameworks like BERT. Because we use a framework: you can use BERT to create the models and create extra layers for each field.
The most important thing is to analyze the context. So Google BERT offers bidirectional analysis: it analyzes what is happening before the keyword and after the keyword.
James: This is important, because the B in BERT stands for bidirectional. Before this type of natural language processing transformer was being used, you were just using one-direction analysis. With bidirectional, you’re looking at what’s around that keyword, before and after. If you’re not doing that, then you have a lot more error.
Diego: Yeah. Using bidirectional solutions, you can ensure the context. The grammar is sometimes important too, especially in text prediction: is the next word a verb, or a noun? Yeah, does the statement make sense? So this is a good option; Google offers a good option to analyze text.
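The bidirectional idea in the ‘book’ example can be illustrated with a toy disambiguator that inspects the words on both sides of the keyword. This is nothing like BERT’s actual architecture (a transformer pre-trained on masked tokens); the cue words are invented purely to show why both directions matter.

```python
def sense_of_book(sentence: str) -> str:
    """Guess whether 'book' is used as a verb or a noun,
    using a couple of words of left and right context."""
    words = sentence.lower().replace(".", "").split()
    i = words.index("book")
    before = words[max(0, i - 2):i]  # left context
    after = words[i + 1:i + 3]       # right context
    # Invented heuristic: "to book a ..." reads as a verb;
    # otherwise (e.g. "that book") treat it as a noun.
    if "to" in before and "a" in after:
        return "verb"
    return "noun"

print(sense_of_book("I want to book a table for two"))             # verb
print(sense_of_book("I could not find the results in that book"))  # noun
```

A one-direction model that only saw "I want to book" could not use the "a table" that follows; looking both ways is what resolves the ambiguity.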
James: And when we’re looking at this bidirectional text analysis, it’s always going to do it the same way. I mean, it’s learned, it has a pool of data that it’s comparing against to make the decision. So it’s always going to do it the same way, and learn as it goes, so that it becomes better at it. Which takes us back, Paul, to when we were talking about the whole idea that, yes, I could have four or five people working in my socials department, four or five people looking at responses and comments. But if I’ve got, you know, one Canadian, one Australian, one Brit, one Syrian, a Colombian, and maybe someone from Eritrea, we’re going to have all sorts of different understandings of that sentence and how the words might be used, whether we’re going bidirectional or not. The potential for error, to me, sounds much greater in that context. And I don’t know how much we can actually look at, because, you know, I’m gonna need a smoke break, a coffee break, a lunch break, a vacation. This process just keeps doing it.
Paul: Yeah, I think what tools like this enable, as we’ve discussed multiple times, is the ability to look at much more than a human could ever do, and with less error. And the really interesting thing with all of this, and we’ve talked about this before, is not conflating data with insight.
James: Which, I mean, I constantly do. I constantly take the data and say I’ve got insight, when really I just have an observation.
Paul: Yeah, and that’s different. An observation is different to an insight. An insight, you know, helps us understand behavior, or uncovers a truth about behavior or patterns, and humans are still probably the best equipped to do that, because there is nuance, and we’ve talked about some of that with Diego. The machines can sort the data out, but it still requires somebody to use and utilize that data. That’s what is so powerful about this: it enables, yes, some prediction about the future, but there’s still a really strong role for people who do research to apply creative thinking.
James: And that’s what I’m interested in, where we’re going with this with Diego: where does this leave me with the data and the ability to make decisions and play with insight?
Diego: This covers many, many things. Not only sentiment or dialect detection, yeah; it covers things like emotions, yeah, discovering feelings, yeah, and discovering the preferences that you have, based on the way you write or speak. Yeah. So this is like an umbrella with different applications, and we can use it to create a strategy, to create the next step. What is the next step? Yeah. So, just an example, just in the Arab world: yeah, Arabic has 17 main dialects.
And one is Modern Standard Arabic, the standard Arabic that you can find in the schools, in books, in the Quran, and in the newspapers. But just 30% of the ads use Modern Standard Arabic. Yeah. What happens with the rest of the 17 dialects in the Arab world? They use their own dialect. For that reason, it’s important to know what is happening with the dialects. Yeah. Even the books: now books are being written in each dialect, so that’s, yeah, amazing. It’s not just Standard Arabic, the basic Arabic; people are using the dialects. You can find them in tweets, posts, ads. Yeah, so you need that insight, you need this information, yeah, to take the next step. And this combination, linguistics and computer science, offers that solution. Offers a deck of solutions!
James: This is amazing, because I’ve got this tool now that is going to give me an enormous amount of data, but that can be focused: focused on sentiment, focused on dialect.
Paul: And also practical applications. Getting away from even just our specific use case, I think what’s important is that people use this stuff every day and shouldn’t be confused by it, getting right back to that. It’s a confusing topic, obviously, and one you need technical training to achieve. I’m not saying doing it is open to everyone, but understanding it is, at least the basics.
James: I think for the general audience, as soon as you mention predictive text, they kind of go, okay, I get it. Now, there’s the complexity of how it works, and the buyers are okay with that. And, you know, that’s why we have Diego, because he understands the nuance of it; you’re gonna make it work.
But the bell that goes off in my head is: I want to be using these tools. With NLP, I suddenly have a lot more data that I can look at, but I also have a lot more data that is very focused, which means I have enormous potential for data analytics at this point. So the great thing is, I’ve got more; the bad thing is, I’ve got more. Which is good and bad at the same time.
Paul: Yeah, and I think a great application of this thinking might be a completely separate field, where there’s often a phrase: past performance isn’t an indicator of future return. But actually, 90% of the time it is. If you look at things like the stock market, for example, more information from the past gives us a great view of what the future may hold, with a certain degree of probability. And that’s what this is; that’s what this enables us to do. Going right back to your specific example of butter, it helps us understand, based on past performance, what might happen in the future. And when you unlock that, you’re not just looking at an oil price or a stock market index, you’re looking at everyday behavior, everyday feelings, everyday happiness. You can then predict with some accuracy what might happen in certain events in the future. And if you’re a business, you can make plans to counteract that, or take advantage of it, or do whatever you need to do with that information. That’s where thinking about things simply, then applying technical knowledge, then working with people who know what they’re doing, like Diego, really helps.
James: It unlocks things. As you just said, it’s an unlocking tool. Yeah, it’s that simple.
I feel like my whole mind has just gone: I get it. I mean, do I get the technicalities of how NLP works? No. I just need to understand that this is what it does: it’s an unlocker. And going back to previous episodes, there’s a whole spectrum of these AI tools out there, from very rudimentary, entry-level things that I might be able to afford access to, to things that are far more complex and a little more expensive, which larger organizations can access. But you want to get access to it, because it’s going to unlock the data for you to make better decisions through insight.
Paul: Yep.
James: It’s that simple.
Paul: It is.
James: Know your audience.
Paul: Know it well.
James: Diego, thank you very much for joining us.
Diego: Thank you.
James: It’s been a lot of fun.
James: I’m James Piecowye.
Paul: I’m Paul Kelly. And this is ‘Know your audience’.