Imagine a flying saucer lands in Times Square and an alien steps out carrying the game of Go. He walks up to the nearest person and says, “Take me to your best player.” Now, let’s assume the alien spent years studying how humans play Go, watching replays of every major match. If that were the situation, it would seem humanity was being set up for an unfair challenge. After all, the alien had the opportunity to prepare thoroughly for playing humans, while the humans had no opportunity to prepare for playing aliens. The humans would likely lose.

And that’s exactly what happened when an “alien intelligence” named AlphaGo played the human Go master Lee Sedol. The human lost 4 out of 5 games. But if we look at the big picture, it wasn’t a fair match. Still, the media went wild, calling the victory a historic milestone in A.I. research, an unexpected leap that took the scientific community by surprise. I agree completely, but not because it demonstrated that A.I. is good at playing the game of Go. No, this victory demonstrated that A.I. is good at playing the game of humans.

After all, AlphaGo didn’t learn to play by studying the rules and thinking up a clever strategy. It learned by studying how people play, processing thousands upon thousands of matches to understand how masters make moves, how they react to moves, what mistakes they’re likely to make, and what moves they’re likely to miss. The system trained by reviewing 30 million moves by expert players. AlphaGo is thus a system that makes beating humans as effective as possible by studying us inside and out, learning to predict what actions we’ll take, what reactions we’ll have, and what errors we’ll make. This suggests a future in which we humans are easily controlled by intelligent systems that can predict our tendencies, our preferences, and our actions and reactions, all while finding our weaknesses and exploiting them. That is what this AlphaGo milestone really means.
And we should all be very concerned. According to published research, the AlphaGo system was able to correctly predict what move a human will make 57% of the time. Imagine if you could correctly predict what a person would do 57 percent of the time — maybe while negotiating a business deal or selling a product.
Over the past year, big names in technology and science like Elon Musk and Stephen Hawking have warned that the threat of artificial intelligence gone bad might be more than just science fiction. In tweets and multiple public appearances, Musk has warned that the dark potential of unregulated artificial intelligence could spell the end of mankind. On Thursday, the SpaceX and Tesla Motors head donated $10 million to the Future of Life Institute for the creation of a grant program that will look into how to keep AI friendly towards humans.

Like most things Musk says and does, there’s an aspect of salesmanship to be found when reading between the lines. Musk has invested in two major AI firms, Vicarious and DeepMind Technologies, the latter of which was acquired by Google. Look who else is quoted in the news report announcing Musk’s donation: “Dramatic advances in artificial intelligence are opening up a range of exciting new applications,” said Demis Hassabis, Shane Legg and Mustafa Suleyman, co-founders of DeepMind Technologies. “With these newfound powers comes increased responsibility. Elon’s large donation will support researchers as they investigate the safe and ethical use of artificial intelligence, laying foundations that will have far-reaching societal impacts as these technologies continue to progress.”

Musk is a person who has futuristic visions and makes big gambles on creating whole new sectors that could, arguably, move humanity forward, but he’s also a capitalist who knows that the potential rewards are in line with the magnitude of the risk. For all his talk about the dangers of AI, a concern echoed by others like Hawking and by Nick Bostrom in his recent book on the topic, Superintelligence, Musk seems to have decided to be outspoken on the issue and to make this “donation” as a pre-emptive strike against negative public opinion, a potential obstacle for AI on its journey towards maturity and profitability.
In other words, for Musk, it’s about protecting an investment as much as it is about protecting humanity from mean-spirited machines. After all, Musk has said that Tesla will be the first to market with self-driving vehicles. That’s a low level of artificial intelligence compared with superintelligence, but Musk still has to ensure that we’ll be comfortable riding around in “smart” cars as AI develops further.