Weds 13 April – ‘The future for artificial intelligence?’ – introduced by Noush
There was recently a story in the news about Tay, an artificial intelligence created by Microsoft to interact via Twitter and to learn from conversations with people.
Unfortunately, what Tay learnt wasn't very pleasant, and it had to be taken offline after tweeting racist and sexist remarks.
Is this the result of the 'online disinhibition effect', where people act differently as a result of perceived anonymity? Did people decide that because Tay wasn't 'real', they didn't need to interact with it in the same way as they would with other people? Does this mean that future artificial intelligences will need to have limits placed on their ability to learn?