Elon Musk has a Love-Hate relationship with AI

Elon Musk has sent a warning over Artificial Intelligence (Image: GETTY)

Elon Musk is scared of AI

Musk’s love-hate relationship stems from his fear that AI will be misused. The SpaceX CEO’s concerns have been echoed by several great minds in the field, including Stephen Hawking, Ray Kurzweil, and Bill Gates. The “Technoking” of Tesla has spoken on several occasions about the possible dangers of AI.

Elon Musk: The SpaceX CEO has warned of artificial intelligence outsmarting humans (Image: GETTY)

Back in 2020, Musk said that within five years or less, AI will be vastly more intelligent than the humans who invented it. But there’s no need to panic yet.

“That doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird,” the SpaceX CEO continued in an interview with The New York Times.

That said, Musk does not believe AI is inherently harmful, and he certainly doesn’t avoid it. In fact, all of his companies have incorporated AI into their systems.

However, Musk is worried about how AI will be used. AI can cause practical problems of all kinds, unemployment for example.

Musk wants to ensure AI is developed responsibly and with caution, and if governments are slacking, he will do the job himself. The OpenAI co-founder has poured much of his time and resources into promoting responsible AI development.

Meanwhile, Musk is working on Neuralink, a technology that could give humans a small advantage should an AI apocalypse ever happen.

Neuralink in the Humans vs. AI Battle

Elon Musk’s company Neuralink would be our main defense against AI. Neuralink aims to augment human abilities and intelligence.

Image: The Quint

Neuralink is an AI device implanted into the brain by a surgical robot. The device would enable users to connect with machines and even control them using only their thoughts.

Neuralink’s first step will be enabling the device to read thoughts from the electrical signals in the brain, letting users control basic devices and perhaps even type with their minds.

However, the initial phase will focus on medical applications of AI. Neuralink aims to help paraplegics with simple tasks so they can live more independently, mainly tasks that involve using a phone or a computer telepathically.

Musk added that Neuralink could restore eyesight even in patients with a completely damaged optic nerve, and that the technology could eventually heal any neurological injury in the brain or spinal cord. Neuralink might even be able to treat epilepsy.

In the short term, Neuralink hopes to treat memory loss, speech impairment, blindness, and paralysis.

Neuralink still has a long way to go, but for now it’s the best use of AI we can ask for. Hopefully, human trials will begin by the end of 2021. Neuralink has already implanted a chip in a monkey, enabling it to play video games.

OpenAI: The AI Police

OpenAI is Musk’s initiative to keep AI from running too wild. Serving as the AI police, the company was founded in 2015 as a non-profit to create artificial general intelligence (AGI) that is both safe and beneficial to humanity.

Elon Musk warns UN to ban “killer robots” (Image: Teslarati)

Artificial general intelligence (AGI) refers to a highly autonomous system that would outperform humans while serving humanity’s interests. OpenAI works to keep AGI development safe while coordinating with other AI companies toward a unified outcome.

However, Musk gave up his board seat to avoid a conflict of interest, as his own companies are also working on AI development. He can’t be his own police. But Musk remains a loyal and generous donor to OpenAI.

Despite never completely abandoning OpenAI, Musk tweeted that he no longer agrees with everything OpenAI is trying to do.

Perhaps the most controversial point regarding OpenAI is a research paper on a new AI model that generates realistic text snippets based on predictions. The danger of this research is that it could be used to spread disinformation across the web. Fortunately, OpenAI did not release the full model to the general public.

Regardless, OpenAI’s research is vital to the safe and steady growth of AI. Most of the company’s work is nowhere near harmful, aside from a few glitches here and there.
