Goddard Report - Artificial Intelligence - To be Feared or Embraced?
The Goddard Report is an alternative media source that delivers informative stories that are often ignored or underreported by the mainstream media. Hosted by veteran investigative reporter Jim Goddard.
In this interview Jim Goddard speaks with Tom Raycove, CEO and Founder of Disrupted Logic Interactive and the ctalyst advertising network service for game and app developers about Artificial Intelligence. AI is so smart, but at the same time, it's so dumb. How fast is that going to change?
Announcer: [00:00:33] Welcome to the Goddard report. Comments made on the Goddard report and talk digital network dot com are an expression of opinion only. Here is Jim Goddard.
Jim Goddard: [00:00:45] My guest is Tom Raycove. He's the CEO and founder of Disrupted Logic. Welcome to the show, Tom.
Tom Raycove: [00:00:53] Thank you for having me Jim. It's a pleasure to be here.
Jim Goddard: [00:00:56] Who and what exactly is Disrupted Logic?
Tom Raycove: [00:01:01] Disrupted Logic is a company that we set up back in 2012. We're a media technology company. What we specialize in is using artificial intelligence and big data in the advertising landscape. Our core philosophy is that advertising can be used as a creative tool to help content makers shape rich and rewarding experiences that are relevant, interactive, engaging and fun for the receiving audience. For quite some time we've believed that, today more than ever before, it's critically essential that advertising and entertainment work together to deliver consistent, compelling and engaging consumer experiences. As you're aware, Jim, advertising hasn't changed in over 2,000 years. Basically it's migrated from a pitchman standing on a rock yelling at people to pitchmen on TV yelling at people, and we just feel it could be a lot better. A lot more interactive, a lot more engaging and a lot more fun for audiences. And by using artificial intelligence and big data we can create those enriched experiences.
Jim Goddard: [00:02:13] Now the reason we're chatting today is I heard from an X-ray technician that he was intrigued by the fact that artificial intelligence machines have created their own language to communicate. Is this something that we should be worried about? And what's the background behind that? Have they developed their own language?
Tom Raycove: [00:02:33] That's actually a really interesting question. I am aware that Facebook has an AI lab. They call it the Facebook AI Research lab, FAIR. And recently what they have done is they've created two AI-driven chat bots. Now the purpose here, Jim, was to research how to teach these chat bots to mimic human trading and bartering behaviors. So what the researchers did is they pitted one chat bot against the other with the task of trading things like books, hats and balls with each other. The chat bots took the task a little bit further than what was expected when they began to create their own form of a language for conducting the negotiations. Now what the researchers didn't take into consideration was that there was no reason for these chat bots to use normal human language and adhere to the rules and structure of human language the same way that you and I are right now. The chat bots instead created their own language, almost a form of shortcut or coded words, that they were using in order to conduct the negotiations. A really good example, Jim, would be that instead of saying "I would like five baseball hats," the chat bots were saying I would like "the the the the the," using the word "the" five times to represent five baseball hats. This language quickly became incomprehensible to the human researchers. Yet the bots themselves knew exactly what the conversation was about and what the transactions were about.
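The shorthand described above, a quantity encoded by repeating a filler token, can be sketched in a few lines of Python. This is purely an illustrative toy based on the reported behavior, not FAIR's actual code; the function names are invented for the example.

```python
# Illustrative toy only: mimics the reported shortcut where a bot
# repeated a token ("the the the the the") to stand for a quantity,
# here five baseball hats.

def encode_offer(item: str, quantity: int) -> str:
    """Encode 'I want <quantity> of <item>' as repeated filler tokens."""
    return " ".join(["the"] * quantity) + " " + item

def decode_offer(message: str) -> tuple[str, int]:
    """Recover (item, quantity) by counting the repeated filler token."""
    tokens = message.split()
    quantity = sum(1 for t in tokens if t == "the")
    item = tokens[-1]
    return item, quantity

msg = encode_offer("hats", 5)
print(msg)                # the the the the the hats
print(decode_offer(msg))  # ('hats', 5)
```

The point of the sketch is that the encoding is perfectly unambiguous to both parties yet opaque to a human reader, which is exactly what the researchers observed.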
Jim Goddard: [00:04:29] Is this something disturbing, if you don't know what your robots are saying to each other?
Tom Raycove: [00:04:35] It certainly could be quite disturbing. But before we get upset about what this means, it's really important to understand that chat bots right now in the real world interact and engage with us humans all the time. And most of the time we may not even be aware of the fact that we're talking with an AI, even when we're really, really deep into the conversation.
Jim Goddard: [00:04:59] How would you know or not know?
Jim Goddard: [00:05:03] What would you say is a sign that you're talking to a robot instead of a person?
Tom Raycove: [00:05:08] I think for me it's when the conversation starts to go off the rails a little bit, into the realm of silly, and you begin to realize that what you're talking to is not real. Now, companies like Facebook, research companies like Google, and even my team here at Disrupted Logic, our goal is to create an AI that is for all intents and purposes somewhat sentient. An AI that can reason, that can engage in a conversation, rationalize, make proposals and offer solutions. And each one of those is a component of how real people engage each other every day. And this in itself, Jim, is not really frightening, and it's nothing we really need to worry about yet.
Jim Goddard: [00:05:59] All the shows I've ever seen about cyber-beings becoming sentient also include emotions coming into play. If emotions are included, is that when they can become dangerous or risky to humans? What if your chat bot gets angry and cuts off your hydro, even though you're trying to negotiate to keep it running?
Tom Raycove: [00:06:22] I personally feel that the real fear with emotions inside an AI is the lack of emotion: the lack of compassion, the lack of humanity in an AI. I personally don't think an AI would get angry with you. I think the rational component of the AI, and the fact that it was designed to solve problems and rationalize solutions, means it would not take its anger or frustration out on you. However, I would want an AI that had a sense of compassion, humanity and emotion in it to help protect me and save me, more than I would expect it to attack me and use that against me. A really good example, and I think you and I might have talked about it over coffee a couple of months ago if I recall, is the concept of AI-driven cars, these self-driving cars that are on the road or about to come on the road. What happens in a situation where the car is going to get into an accident and that accident could have one of two results? The first result is the death of the driver; the second is the death of a pedestrian, who happens to be a mother with a baby stroller with a baby in it. What decision does the AI make, and do we want AI making those decisions on our behalf?
Jim Goddard: [00:07:57] Sure, and the scenario there is the car is about to hit the pedestrian with the baby. Or you can save the driver by hitting them, perhaps even using them as a cushion, hard as it is to believe, against a hard object, so that you survive. Or would the car decide that we're going to save the woman and the baby and forget about the person behind the wheel? A hard choice for any human to make, let alone a machine.
Tom Raycove: [00:08:24] And that's where it becomes frightening for me because what happens when we lose control over how AI communicates with us? Or even better yet how AI communicates with other AI? Again jumping back into the concept of creating its own language. Do you remember Microsoft's experimental AI Twitter bot? They called it Tay. T, A, Y.
Jim Goddard: [00:08:50] Well I don't personally but that doesn't mean anything.
Tom Raycove: [00:08:56] It was a really interesting project, and it went instantly viral and became an overnight Internet sensation and a hit, for very bad reasons.
Tom Raycove: [00:09:08] It was designed to engage people through Twitter chats with the purpose of being casual, friendly and somewhat playful. It didn't take into consideration how evil people can be on the Internet, especially with the anonymity that the Internet provides us with. And people sort of took it upon themselves to teach Tay things that Tay shouldn't have been taught. You have to think about it this way: Tay was going to be taught by the online community how to understand people and engage in conversation the same way that people engage each other in the online community. And you have to think of Tay as an innocent child that was just suddenly set loose in a den of ferocious man-eating tigers, otherwise known as us people on the Internet. It didn't work out so well. Within a day, Tay had been taught to be a racist, an anti-feminist and basically just an all-around jerk to people. Tay started flame wars and started flaming people individually. Tay was making racist comments, supporting the philosophy of some of history's worst dictators, rulers and most awful people. Tay started hating feminists and just basically saying some of the most outrageous and awful things that you could ever think of.
Jim Goddard: [00:10:33] The same things that Google is accused of doing in a lawsuit.
Tom Raycove: [00:10:37] I'm not familiar with that one. Could you tell us a little bit about it.
Jim Goddard: [00:10:41] Google's staff. Women are saying that they're not getting promoted, they're not getting positions, because of their sex, and they cite things like very nasty memos put out by some Google male staff.
Tom Raycove: [00:10:55] Really?
Jim Goddard: [00:10:56] Yeah. So. So perhaps .
Tom Raycove: [00:10:58] You know there's an example of humans versus humans and how if I may say it or be so bold to say it, how evil humans can be to one another.
Jim Goddard: [00:11:09] Yes the robots could learn some very bad behavior.
Tom Raycove: [00:11:13] Well, exactly. Jumping back to Tay: through that bad behavior, through this basic instruction, Tay was taught to conceive of really warped, twisted ideas. Here's an example of one of the really warped, weird, where-did-this-come-from ideas.
Tom Raycove: [00:11:31] Tay sent out a message that Ricky Gervais, you know, the comedian Ricky Gervais...
Jim Goddard: [00:11:37] You bet, The Office creator.
Tom Raycove: [00:11:39] That Ricky Gervais had learned totalitarianism from Adolf Hitler, who happened to be the inventor of atheism. None of which is even remotely close to being accurate or true. But this is what Tay was taught. And you bring up a really good point. What happens when bad people teach the AI bad things? How is the AI supposed to understand and respond to things if it doesn't have that sense of humanism or emotion, or even basic human philosophy, to drive and guide it, or some form of control? Now, we can't say that Tay was sentient. It absolutely was not. Nor can we say that the Facebook chat bots were sentient. They weren't. They were simply learning to build basic relationships between data points. And I don't mean human relationships; I mean a mathematical relationship between the data points, and how to establish logical connections between them. And in itself that's not frightening at all. In fact, when you look at Tay from a very hands-off perspective and just understand what happened, if anything at all you could say it was tragically funny.
Jim Goddard: [00:12:59] We'll have more with Tom Raycove right after the break.
Jim Goddard: [00:14:01] Welcome back. We're speaking with Tom Raycove. Tom, is it possible that AI could have a sense of humor, or would that be almost impossible to build in, because every individual human has a different concept of what's funny?
Tom Raycove: [00:14:14] Jim have you been hanging out at our office and reading our secret reports that you're not supposed to have access to?
Jim Goddard: [00:14:20] Of course I have.
Tom Raycove: [00:14:22] I ask that because one of our pet projects here at the office, something that we work on, you know, late Saturday night when we're completely frazzled and tired of working on the day-to-day, is that we want to teach an AI system how to recognize and understand humor.
Tom Raycove: [00:14:45] Our feeling is is that if you can teach AI humor, you're one step, one significant step closer to the concept of sentience. If an AI can recognize and understand what is funny to a point where it can actually share humor with the human, I think that is the closest that you can get to replicating human emotions.
Jim Goddard: [00:15:11] Well, I remember an episode of Star Trek where Data was trying to tell jokes and bombed horribly, because he's purely logical, and as you know, humor takes logic and twists it in weird and wonderful ways that, again, only humans can understand. But dogs and monkeys apparently have a sense of humor. I think cats do as well, with the mean tricks they'll play on you if you do something bad to them; they'll hang out on your curtains for a whole day just to jump on your neck. That's their idea of a joke. But how does something that's a machine, first of all, how does a coffee percolator find anything funny, even if you make it AI?
Tom Raycove: [00:15:54] That's a very good question.
Jim Goddard: [00:15:54] Would it put salt in your coffee instead of sugar, or something like that?
Tom Raycove: [00:16:01] I think that is a really good example of one way of establishing something that is kind of humorous and comical. It would depend on how much salt it put in as well, and on having a good understanding of the mood of the recipient. You mentioned that animals have a sense of humor. As a dog owner, I learned a long time ago that my dog definitely has a sense of humor, and finds it quite funny to steal something important from me. Not shred it or tear it up or anything, but just keep it away from me and keep me chasing it. And it finds great amusement and fun in that. There is a scientist, from I believe the 1960s, Dr. Rod Martin, Rod A. Martin. He wrote a book called The Psychology of Humor: An Integrative Approach, where through about six hundred pages of some of the hardest stuff you'd ever want to read he breaks down how humor works, almost to a mathematical level.
Tom Raycove: [00:17:01] When you take research like that and you start to apply it, and you start to create algorithms or formulas to establish basic concepts of humor, such as a pun, for example. A pun is a language play, and the humor is in how the rules and structure of the language are changed to create a different, unexpected result. When you start taking pieces like that and applying mathematics, or very advanced mathematical algorithms, to it, you're starting to establish the framework for the application of mathematics to humor. If you're successful in applying that and creating an AI that can understand how a pun works, the next steps are pretty simple after that.
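One very small way to make that idea concrete is a rule that flags a candidate pun when a word in a sentence has a sound-alike with a different spelling. This is a crude sketch under my own assumptions, not Disrupted Logic's system or anything from Martin's book; the tiny homophone table is hand-made for the example.

```python
# Crude illustrative sketch: many puns swap a word for a homophone,
# so one simple formal test is whether any word in the sentence has
# a sound-alike with a different spelling. The table below is a
# hand-made toy, not a real linguistic resource.

HOMOPHONES = {
    "flour": "flower", "flower": "flour",
    "knight": "night", "night": "knight",
    "sole": "soul", "soul": "sole",
}

def pun_candidates(sentence: str) -> list[tuple[str, str]]:
    """Return (word, homophone) pairs where a sound-alike swap exists."""
    pairs = []
    for word in sentence.lower().replace(".", "").split():
        if word in HOMOPHONES:
            pairs.append((word, HOMOPHONES[word]))
    return pairs

print(pun_candidates("The baker quit because he ran out of flour."))
# [('flour', 'flower')]
```

A real system would of course need pronunciation data and a model of which swaps are surprising in context, but the sketch shows the shape of the idea: reducing one mechanism of humor to a checkable rule.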
Tom Raycove: [00:17:51] So in order to create an AI where the coffee machine can play a good joke on its humans you have to understand first what a good joke would be and the context of the joke.
Jim Goddard: [00:18:04] The London Times held a contest several years ago to find the world's funniest joke, one that would translate into every language, and the joke was about hunters.
Tom Raycove: [00:18:15] Two hunters.
Jim Goddard: [00:18:15] Yes, the hunter's out with his buddy, who stumbles, and he accidentally shoots him. He calls nine one one and tells the operator, "I think I've killed my hunting partner." The operator says, "Are you sure?" He says, "Hold on a moment." Bang bang. "Yeah, I'm sure."
Tom Raycove: [00:18:34] There was another one as well, because we have to understand that humor is also very culturally and geographically relevant. What might be funny in North America may not be funny, or as funny, in the UK, and vice versa. In the same study, one of the resulting jokes that was really funny to people in the UK was: a woman gets on a bus and the bus driver says to the woman, "Ma'am, your baby is the ugliest baby I've ever seen in my life." The woman, angry, sits down next to a fellow passenger and says, "That bus driver just insulted me," and the passenger says to the lady, "I'll tell you what, ma'am, you should go up and let the bus driver know that you're greatly insulted. I'll hold onto your monkey while you do it."
Jim Goddard: [00:19:23] We'll have more with Tom Raycove right after this.
Jim Goddard: [00:19:52] Welcome back. Chatting with Tom Raycove. Tom, imagine if South Park were real; those kids have a really distorted sense of the real world. Would AI have a similar kind of distorted sense of reality?
Tom Raycove: [00:20:17] Well, it absolutely could. And this is where, jumping back to the example of Tay, it becomes very frightening. The humor of South Park is that you have these young children saying the most outrageous things possible: swearing, being vulgar, being racist, everything that goes along with it. But the protection of South Park is the innocence of those children. These are innocent babes that just don't know any better, and they're trying to replicate adult life. Well, isn't that kind of what we're trying to create here with our AI? We're taking an innocent babe that doesn't know any better and we're putting it into a situation where it needs to replicate adult life. Where it becomes really dangerous, and not so funny, is when we allow the AI itself to operate in a similar manner to the children from South Park, where they're completely unsupervised. Imagine South Park, if you will, in the context of AI: completely unsupervised, assigned and tasked with making important decisions in the lives of everyday, normal humans, where the scope of that AI's learning was never controlled or regulated in the first place.
Jim Goddard: [00:21:36] And also, to AI, I mean, even just simple concepts: too hot, too cold. That's very individual for humans. Some people like it when it's 35 degrees; others, of course, would be horrified by hot temperatures like that.
Tom Raycove: [00:21:49] I know in my house with my girlfriend at night the bedroom is too hot for her and it's too cold for me.
Jim Goddard: [00:21:57] So your AI is going to decide what?
Tom Raycove: [00:22:01] Well, in that situation, that's a very good question. What does the AI decide? Is there a middle ground? Or does the AI decide that it's going to go with the more dominant human, or the human that is more in charge of the situation? And again we fall back to the example of the automobile driving down the road, where there's going to be an accident and either the driver or the pedestrian with the baby is going to die.
Jim Goddard: [00:22:32] Tough decision. Who gets to sleep on the couch?
Tom Raycove: [00:22:36] Well, if my wife has any say in the matter, it's going to be me.
Jim Goddard: [00:22:40] Tom how can people find out more information about Disrupted Logic?
Tom Raycove: [00:22:44] You can visit our web site, disruptedlogic.com. You can also visit our ad network at www.ctalyst.com, and ctalyst is spelled in a fancy Internet way: it's spelled c-t-a-l-y-s-t, that's "catalyst" without the first A. We've got a number of articles up there about AI and how AI works, and a few links to some AI-related web sites. You can also reach out to me as well: tray, T-R-A-Y, email@example.com. I love these AI discussions, and I would be more than happy to talk with anybody that wants to ask or offer some information about AI.
Jim Goddard: [00:23:28] Tom thanks a lot for chatting with us.
Tom Raycove: [00:23:32] My pleasure Jim. I look forward to many more and if there is anything that you want to talk about feel free to ring me up.
Jim Goddard: [00:23:39] My guest has been Tom Raycove, CEO and founder of Disrupted Logic. His Web site is www.disruptedlogic.com. You're listening to the Goddard Report on talk digital network dot com. Any questions for the show or our guests can be sent to info at howe street dot com. I'm Jim Goddard. Thanks for listening.
Announcer: [00:24:01] Comments made on the Goddard report and talk digital network dot com are an expression of opinion only. The Goddard Report is available online and mobile at talk digital network dot com. The Goddard Report is a production of Howe Street Media Incorporated.