Who is Geoffrey Hinton? The Godfather of AI quits Google
Geoffrey Hinton, also known as the "Godfather of AI," has quit Google to "talk about the dangers of AI without considering how this impacts Google." Who is Geoffrey Hinton, and why is he considered the Godfather of AI? Here is everything you need to know.
Lately, everyone on the internet has started talking about Hinton, who recently quit his job at Google. After parting ways with the tech giant, Hinton gave an interview to The New York Times and discussed the reasons behind his decision, his next stop, and the dangers of artificial intelligence.
Who is Geoffrey Hinton?
The 75-year-old cognitive psychologist and computer scientist worked at Google for more than a decade, specializing in artificial neural networks. He received the 2018 Turing Award, often called the Nobel Prize of computing, alongside Yoshua Bengio and Yann LeCun. Their innovations underpin the AI tools that are so popular and widely used today.
Hinton, a lifelong academic, joined Google when the tech giant acquired his company, which he had founded with two of his students, one of whom went on to become a top scientist at OpenAI. From then on, he worked with Google to develop artificial intelligence solutions and tools.
Hinton and his students created a neural network that taught itself to recognize common objects such as dogs, cats, and flowers by analyzing thousands of photos. This line of work laid the groundwork for systems like ChatGPT and Google Bard.
For the past ten years, he has also been teaching at the University of Toronto, passing his knowledge on to the next generation.
Why did Hinton leave Google?
Hinton worked on artificial intelligence at Google until the landscape shifted. After Microsoft launched its AI-powered Bing, Google scrambled for an immediate response, and a race began. Hinton did not want to continue his work at Google under those conditions, warning that this kind of competition might be impossible to stop.
According to Hinton, this would result in a society so flooded with fake images and text that no one would be able to identify "what is true anymore."
He said he was happy with how things were going before Microsoft launched Bing. His primary concern is the spread of misinformation through artificial intelligence tools, but it doesn't end there. Hinton also worries that, in the future, AI will destroy jobs and perhaps even threaten humanity if it learns to write and run its own code.
"The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that," he said in his interview with The New York Times.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
Hinton quit his job to warn people about the possible dangers of artificial intelligence in today's competitive environment. As he put it, he wanted to "talk about the dangers of AI without considering how this impacts Google," and he left in order to educate people about the possible outcomes.
Too late, the evil is already done! He believed he was God, and now we are starting to reap the results. Many people enjoy and are fascinated by AI or ChatGPT, but they don't understand what awaits them in the near future! If I were him, I'd put the noose around my neck.
None of this is AI, still. The term is being misappropriated – there is no true "intelligence" here, just language models with clever coding that lets them respond to strings with other strings.
It has memory, processing power, and an ability to apply logic – it may not be "intelligence" as humans understand it or are able to convey in any human language, but it is a form of intelligence nonetheless.
For now it's just a stochastic parrot! Any other perception is the result of faulty human brain behaviour – the brain attempts to make sense of things and reaches for the easiest explanation for them. (AI is intelligent vs. it fakes intelligence.) Psychology strikes again.
"An ability to apply logic" – au contraire, that's exactly where it fails.
You can train ChatGPT to learn that 2+2=5. AIs act by probability, not reason. Without any training data, an AI would be totally helpless, but a human brain is able to lay the groundwork of math without any source material.
People need to learn and understand that.