
Benefits and perils of artificial intelligence



Artificial intelligence is the science of creating machines that can think. Such machines perceive the environment they are in and take the actions that maximise their chance of success (InforBarrel, 2011). The work of Alan Turing and the rapid development of computer science, neurology, information theory and cybernetics inspired many researchers to try to create a smart machine (Wikipedia, 2012). For AI to succeed, it has to mimic the behaviour of the human brain (Oracle, 1997), and the field has accordingly undergone serious development over the last decade. A computer system can now solve algebra problems, prove logical theorems and speak languages. Herbert Simon, one of the founders of AI, predicted that machines would be able to do any work that a human can do (Oracle, 1997).
Today AI is used in many industries, such as logistics, data mining and medical applications. What made this success possible was the increase in the computational power of computers. In this article, I will discuss the benefits and perils of artificial intelligence (AI).

AI systems have many benefits. They have developed from programs that can play checkers to systems that can diagnose diseases. As the intelligence of machines increases, so does their competence to deal efficiently with difficult, complex and dangerous jobs currently performed by humans (InforBarrel, 2011). Machines can work without needing breaks, sick days or sleep, which means they can do far more work than a human, and probably more efficiently (Brookshear, 2010). By getting a machine to do dangerous work, we minimise the risk of harm, given that the machine has no feelings or emotions. Machines can be sent on expeditions to explore other planets or other unknown territories without risking human life. Smart machines can be used to assist elderly and disabled people. The jobs a smart machine can do are limitless: depending on the level of intelligence it attains, a machine will be able to do almost anything, making fewer mistakes and completing the job more efficiently (InforBarrel, 2011).

We have already seen machines like Deep Blue and Watson that appear to think and act like humans. Deep Blue beat Garry Kasparov at chess, and Watson won a quiz show (Wikipedia, 2012). The thing that fascinated me about Watson was his answer to the final question of the show. In the final round, you give an answer and choose an amount of money to wager: if your answer is wrong, you lose the wager; if it is right, you win it. Watson gave the wrong answer, but to everyone's surprise he had wagered a very small amount, because he had doubts about his answer. Thinking like a human, right?
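To see why wagering little when confidence is low is rational, here is a minimal sketch in Python. This is not Watson's actual wagering strategy; the choose_wager function and the confidence values are hypothetical, illustrating only the expected-value reasoning: each dollar wagered gains p and loses (1 - p) in expectation, so the edge per dollar is 2p - 1.

```python
# Illustrative sketch (not Watson's real algorithm): pick a wager
# based on the estimated probability that our answer is correct.

def choose_wager(confidence: float, winnings: int, min_bet: int = 5) -> int:
    """Return a wager between min_bet and winnings.

    confidence: estimated probability (0..1) that the answer is correct.
    winnings:   current total available to wager.
    """
    edge = 2 * confidence - 1  # expected gain per dollar wagered
    if edge <= 0:
        return min_bet  # low confidence: risk as little as possible
    # Scale the bet with confidence; bet nearly everything only when certain.
    return max(min_bet, int(edge * winnings))

if __name__ == "__main__":
    # A system only 55% sure of its answer wagers a small fraction.
    print(choose_wager(confidence=0.55, winnings=20000))  # -> 2000
    # A system 95% sure wagers most of its winnings.
    print(choose_wager(confidence=0.95, winnings=20000))  # -> 18000
```

Under this reasoning, a machine that doubts its answer keeps its wager small, which is exactly the behaviour Watson showed.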
