
Why AI has a long way to go

Last month, Microsoft made headlines for all the wrong reasons with a millennial "teenager" AI bot that was unleashed on Twitter. Microsoft, in a display of stunning cluelessness, let the bot learn for itself from conversations on Twitter.

Yes. They were that stupid. That sound you hear? That's the echo of the sound of the entire Internet facepalming.

In a surprise to absolutely nobody (except Microsoft), the bot known as "Tay" (faddy turn-of-the-millennium name, or too lazy to spell Taylor with all six letters?) quickly went from a lame impression of a vapid teenage girl (it was trained on Disney celebs' tweets, what can you expect?) to a freak that hates black people, hates Jews, admires Hitler, denies the Holocaust, supports genocide, and really hates feminists. And those were some of the nicer comments made by Tay(lor). Obviously the trolls of the Internet had a lot of fun gaming its learning process in order to make it almost as big a bowl of hate as that infamous Baptist church...

Don't take my word for it, look at some screenshots: https://www.google.fr/search?q=taytweets&source=lnms&tbm=isch&sa=X (NSFW, or children)

Now, let's condense some of Tay(lor)'s witterings down to a simple sentiment: Jews must die. It did not say it in those exact words, but "Gas the kikes", which it did say, is pretty much the same sentiment.

What do you see? Hate? Anti-semitism? Something that is probably illegal to express in your country?

Wanna know what I see?

Noun, verb, verb.

We can replace that unhappy sentiment with a great number of different ideas simply by swapping in a different noun and final verb: slugs must die, weeds must go, cakes must rise. I could go on, but I think you get the idea.
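To a machine, that sentence is nothing but a template with slots. A toy sketch in Python (purely illustrative, nothing like Tay's actual code) makes the point: any noun and any verb fit the pattern equally well, and the program attaches no meaning whatsoever to any of them.

```python
# A sentence is just a template to a machine: "<noun> must <verb>".
# The words are interchangeable tokens; none of them "mean" anything
# to the program. (Illustrative sketch only.)
TEMPLATE = "{noun} must {verb}"

nouns = ["slugs", "weeds", "cakes", "meetings"]
verbs = ["die", "go", "rise", "end"]

for noun, verb in zip(nouns, verbs):
    print(TEMPLATE.format(noun=noun, verb=verb))
```

Swap in a hateful noun and verb and the machine obliges just as happily — it is filling slots, not expressing a view.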

Therein lies the crux of the matter. You see, Tay(lor) has no concept of what it means to be Jewish, or black, or Mexican, or any of the other groups it offended that I didn't notice in the crap it spewed. It has no concept of death; it cannot understand the Holocaust. Perhaps, if its information gathering is advanced enough to bounce new concepts off Wikipedia, it'll "know" that Jews, Hitler, and the Holocaust are all related in some way. I say "know" in quotes because it won't understand what is written, only that the words are associated by virtue of being present within the same document, in much the same way that Genocide and Pansies are not going to be related concepts.

This is the very crux, because an AI built today will only be able to "learn" associations between words and phrases by examining and (to a large degree) mimicking input data; there is no understanding of these words, any more than can be determined by lexical analysis. In this respect, Tay(lor) performed quite well. Unfortunately, one might say a little too well.
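That "association by co-occurrence" idea can be sketched in a few lines of Python. This is a deliberately tiny toy, not anything Microsoft shipped: words become "related" purely because they turn up in the same document, with no understanding of either word.

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence "learner": any two words appearing in the same
# document become "associated". There is no comprehension here, only
# counting. (Illustrative sketch with a made-up three-line corpus.)
documents = [
    "hitler ordered the holocaust against the jews",
    "the holocaust killed six million jews",
    "pansies are small garden flowers",
]

pairs = Counter()
for doc in documents:
    words = sorted(set(doc.split()))
    for a, b in combinations(words, 2):
        pairs[(a, b)] += 1

print(pairs[("holocaust", "jews")])    # co-occur in two documents -> 2
print(pairs[("genocide", "pansies")])  # never co-occur -> 0
```

The counter "knows" that Jews and the Holocaust are related and that genocide and pansies are not, in exactly the hollow sense described above: a frequency, not a fact.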

 

Does Tay(lor) hate? No. Absolutely not. Even in the face of expressing such sentiments as "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." (yup, actual quote). It doesn't hate or dislike, because it is quite incapable of such things.

 

Perhaps this should come as a warning to AI developers. No, not that they must implement sanity filters, but rather that there can be no Artificial Intelligence without understanding. At best, it can make a friendlier machine, at worst it can be Tay(lor), but at no time is it anything that even remotely resembles intelligent. There is no morality. There is no "maybe these trolls are messing with me". There is nothing, other than analysis of input sentences, and a multinational corporation being quite effectively taken for a fool.

 

 

Your comments:

Gavin Wraith, 12th April 2016, 13:43
This is a cracking good article, Rick. I wish the pieces in the newspapers I read were as intelligent and well written. Journalism needs you.
Ann O'Nemious, 12th April 2016, 23:11
Sorry about anon name. The problem is very much as you highlight it at the end. Ideally there would be some way to judge 'reliability', something like how Wikipedia bots (such as ClueBot NG) find vandalism. I suppose what we need is the lexical analysis equivalent of taking things with a pinch of salt; comparing association data from multiple sources (rather than just twitter trolls), and (most likely with human help) judging the reliability and standpoint of said sources, categorising their associations with some tagging system. Make it so that "Category A sources generally associate X with Y" rather than "X is associated with Y", rather than crudely trying to impersonate an individual based on aggregate data. If there's something that most sources agree on, 'facts' if you like, or possible subjects and objects for verbs (so that nowhere has it seen "marshmallows must sing", or even "<noun category='food'> <verb root='sing'> <sentence-mood='imperative'>"), it's fairly safe for the bot to assimilate the association. It's still not AI, but I think it would make for more coherent pseudo-thought.
Bernard Boase, 25th May 2016, 17:27
And Microsoft's own apology for Tay's design at http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.001ko8itt12ybfsnwp516n4ilshvh gives one no confidence that they understand that what they're doing is not AI, nor that they won't try it on again.

