The Future of AI Technology?

The digital world is coming, whether we want it or not. Every minute, millions of Google searches are made and millions of Facebook posts are published. Soon, everything around us, from our microwaves to even our clothing, will be connected to the internet. Everything will be intelligent. We will have not only smartphones, but smart homes and smart cities. One thing is clear: in the near future, the way we organize the economy and society will change. There are many ways to implement AI technology in a political setting, such as China's "citizen score." However, ensuring a safe and effective transition to AI that benefits many sectors of society will be difficult.

One of the largest concerns about AI is the jobs it will take. It has been predicted that by 2030, machines will have surpassed humans in many areas. Technology visionaries such as Elon Musk of Tesla Motors, Bill Gates of Microsoft, and Apple co-founder Steve Wozniak have warned that super-intelligence is a serious danger to humanity. Recently, Google's DeepMind algorithm taught itself how to walk. Who knows what AI will learn next? We must always ensure that AI is under our thumb, and not the reverse; the world would be quite boring if all humans were trained from birth to be AI programmers. Before AI is let loose, policymakers must make sure our freedom remains intact, because once AI takes over, we will have restricted freedom over what we can do or change. AI is certainly helpful, but it may be too helpful. The idea of lying back all day and doing no work sounds great, but work is what gives life meaning and purpose. Allowing AI to take care of everything sounds exactly like the plot of WALL-E. AI should be restricted to tasks that are dangerous to humans, such as coal mining, or routine ones, such as driving, because the most important thing here is the right to keep our jobs.


Politicians can use AI to "nudge" people. The internet as it stands gives you a sense of freedom, but in reality, Google and Facebook collect loads of metadata to target their ads at you. For example, if they learn from your searches that you really like McDonald's, your ads will feature more burger deals. Eventually, it is no longer the computer that is being programmed, but the person. Politicians could use ads or a series of internet "lures" to nudge you in the direction they think society should go. This is neither a good nor a bad thing in itself, as it could produce a very peaceful and law-abiding population without anyone feeling controlled. It could, however, be misused by politicians for their own personal benefit, for instance to steer attention toward their campaign page. Whoever controls technology in an election has a decisive advantage, because they have influence over the internet and therefore over people. The "nudging" could also create unpredictable negative outcomes. For example, during the German swine flu outbreak in 2009, people were encouraged to be vaccinated to prevent infection; an unexpected consequence was that a percentage of those who received the vaccine were affected by a new disease. Terrorists, too, will try to obtain this technology: if AI is so talented, it could teach itself to hack into government programs and large corporations, causing widespread panic. It may look as if AI is not a good thing for us at the moment, but if we can find a way to implement such a program without such consequences, it will be revolutionary.

One of the hardest things about AI is aligning it with society's core values. Uncontrolled technology could lead us down the path to a totalitarian world in which we do not govern technology, but technology governs us. We should not develop AI beyond the role of assistant; limiting AI innovation in this scenario would be the optimal solution. That something could be just as good as, if not better than, humans at a task is incomprehensible to us right now. For example, surgeons and therapists were thought to be irreplaceable, but recent advances in technology have made robots precise enough to sew a grape's skin back onto the grape. To use AI safely, we must implement it in a very transparent way, one in which the people vote on whether or not AI is useful. For AI to work in our society, it must promote social and economic diversity, and it must also promote responsible behavior by citizens in our new digital world.

So what does AI ultimately come down to? It comes down to how we use it, because AI can make or break us. Even if AI becomes better than us at doing things, should we really accept that and allow it to happen? AI is useful in the right hands; as described earlier, AI in the wrong hands could lead to skewed political results, unforeseen consequences, cyber warfare, and dangers yet to be discovered. If we do not promote responsible use of AI, then visionaries like Elon Musk will be correct in saying that AI could plague our society. We have yet to understand the secrets AI may be hiding, which is why we must take careful precautions when handling it. AI is a leap forward toward prosperity and innovation, but only if it is wielded by worthy hands.



Citations

Knight, Will. “There’s a big problem with AI: even its creators can’t explain how it works.” MIT Technology Review. May 12, 2017. Accessed November 09, 2017. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

Helbing, Dirk, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter. "Will Democracy Survive Big Data and Artificial Intelligence?" Scientific American. February 25, 2017. Accessed November 09, 2017. https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/.

Ford, Paul. “Are We Smart Enough to Control Artificial Intelligence?” MIT Technology Review. February 01, 2016. Accessed November 09, 2017. https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/.