Alan Turing

The Death of Alan Turing, the Apple Martyr


Nearly seventy years ago, Alan Turing disappeared.

A true war hero and mathematical genius, Alan Turing is also considered by many today to be the godfather of computer science and one of the precursors of artificial intelligence as a field of study. He radically, and forever, changed the destiny of Europe. But he also laid the foundations for a technological revolution that today affects every aspect of our lives.

A complex character with a broken destiny, and a martyr of his time, above all for his sexual orientation, he has inspired many legends. His death, however, remains a mystery to this day.

On June 7, 1954, Alan Mathison Turing, 41 years old and a Fellow of the Royal Society of London, died at his home; he was found the next morning lying dead on his bed. Next to him lay a half-eaten apple (a legend would later make it the inspiration for the logo of Apple, the well-known technology company).

The autopsy concluded that he had been poisoned with cyanide. The inquest then speculated that Alan Turing had administered the lethal dose to himself by soaking the apple in poison before taking a bite out of it. This thesis was notably supported by two biographers, Andrew Hodges and David Leavitt, who believed that Alan Turing had wanted to re-enact the iconic scene from his favourite fairy tale, Walt Disney’s Snow White and the Seven Dwarfs.

His mother, Ethel Sara Turing, did not believe the suicide thesis, preferring instead that of accidental death. For her, her son’s death was the consequence of a careless accumulation of highly toxic chemicals in his home. Hodges, however, thought that Alan Turing had deliberately left his scientific materials in disarray precisely so that his mother could reject any claim of suicide.

But Turing’s mother was not alone. The Turing scholar Jack Copeland also defended the accident theory. He explained that Alan Turing showed no signs of depression in the period before his death and that he even had a written list of projects under way or still to be carried out. He also pointed out that Turing was engaged in experiments of all kinds and kept cyanide for this purpose. According to him, the mathematician was careless, even imprudent, when conducting these experiments and could have inhaled cyanide fumes while trying to dissolve gold.

Other, murkier theories see his death as the work of the British secret services, who allegedly regarded him as a potential security risk at a time of communist espionage scares. Finally, it has also been reported that he was a great fortune-telling enthusiast and that, during an outing to St Annes-on-Sea with the Greenbaum family, he consulted a fortune-teller who told him something terrible, plunging him into deep sadness in the days before his death.

Thus, the exact circumstances of the mathematician’s demise remain unknown, and this is likely to remain the case. What is certain is that Alan Turing’s disappearance was the macabre conclusion of the last, tragic years of his life. The early 1950s were marked by a KGB espionage affair involving Cambridge intellectuals, some of them homosexual (the Cambridge Five), and in that climate Alan Turing, whose unique skills had put him to work on a host of sensitive subjects, was regarded with suspicion. In this tense atmosphere, in 1952, after an intimate relationship with one of his lovers came to light, he was convicted under the Criminal Law Amendment Act 1885 and, to avoid prison, underwent chemical castration, which had damaging effects on his mental and physical health. From that date he was also excluded from major scientific projects. Yet, as Copeland pointed out in support of his thesis, Alan Turing seemed to be gradually getting back on his feet. His treatment had been discontinued about a year before his death, and he was showing positive signs, returning to work.

Unfortunately, he died not long afterwards, in disgrace and in some loneliness, like so many great geniuses before him. His legacy, however, lives on, and the digital wave he fathered still sweeps across the world today, carrying everything in its path.


The Life of Alan Turing and the Turing Machine


Every time I ride the subway, it is hard to find anyone who is not using a smartphone. A world without smartphones is no longer imaginable. The history of how the world came to be shaped this way is neither very long nor very short. Through the movie ‘The Imitation Game’, many people came to know about Alan Turing, the mathematician who first conceived of a machine that could think. Coming up with something new, and getting that idea accepted by the public, is not easy. I became interested in Alan Turing’s life because he is one of the influential people who led the world into IT, information technology.

Alan Turing was different from childhood. He did not have a settled family background, as his father worked overseas from the time of his birth. He was often raised by others and grew up introverted and unsociable. (B1, 198) When he attended the English public school Sherborne, the headmaster recognized Turing’s talent but remarked that he might be wasting his time at the school. (1) His way of thinking was far removed from that of his classmates, and he was not a sociable person who could interact with people easily. In this solitary life, Turing found a friend, Christopher Morcom, who became the person closest to him, with whom he could work and share his ideas. Morcom was more than a friend: it was through him that Turing first became aware of his identity as a homosexual. He loved Morcom deeply, as a beloved friend and first love, but Morcom died a few years later. (2) Turing suffered greatly from Morcom’s death, but believing that Morcom was watching over him, he changed his attitude towards people and studied hard. Morcom remained for Turing a symbol of perfection for the rest of his life. (B1, 200) Their connection lasted even beyond that: Turing was later awarded the Morcom Prize for science, endowed by Christopher’s parents, for his outstanding achievement. (B2, 164)

Alan Turing’s most famous achievement came during World War II. Building on the knowledge gained in his studies, he had been working part time, and privately, for the British cryptanalytic department. When Britain declared war in 1939, he moved to full-time cryptanalysis at Bletchley Park. (4) His team had one main goal: to break the Enigma code. Enigma was the enciphering machine used by the German military to communicate securely. It had once been cracked by the Poles, but the Germans increased its security by changing the settings every day, which made it take the British a long time to read the messages. (6) From mid-1941, Turing worked around the clock on how to interpret the German traffic. (4) Because the keys changed every day, finding the day’s key by hand was barely possible. Turing’s answer was to bring machinery to bear on the problem, an approach rooted in the theoretical machine he had described before the war, now known as the Turing machine.

Turing knew that an algorithm is usually defined as a series of rules that can be followed in a purely mechanical way. He imagined a machine that could take over this basic work from people, doing it more accurately and more quickly. (B1, 205) From Turing’s analysis of the concept of calculation, it can be concluded that anything that can be calculated by any algorithmic process can be calculated by a Turing machine. (B1, 210) For a Turing machine to work, only three simple operations are needed: change the symbol in the square being read, move one square to the left or right, and change the machine’s internal state. (B1, 211) Alan Turing’s invention, the Turing machine, is recognized worldwide; its general form is called the Universal Turing Machine, and every computer science student learns about it. His name cannot be separated from computer science, and even the field’s top award for computer scientists, the Turing Award, is named after him. (5) Although he is now a famous figure in mathematics, his end was not a happy one. He was prosecuted for homosexuality, which was then a criminal offence, and he died at the age of just 41 of cyanide poisoning, a death recorded as suicide. Only his mother rejected the suicide verdict, pointing out that he had always been followed by a plainclothes policeman. (B2, 170) Alan Turing’s mysterious death has never been clearly explained, but today he is considered the father of computer science. His new approach of having machines take over human calculation helped bring World War II to an end.
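To make those three operations concrete, here is a minimal sketch in Python of a machine of this kind. The rule table and the toy example (a machine that simply inverts a binary string) are illustrative assumptions of mine, not anything taken from Turing’s own papers.

```python
# A minimal Turing machine sketch: a table of rules, each of the form
# (state, symbol) -> (new symbol, move left/right, new state).
# The toy machine below, which inverts a binary string, is invented
# purely for illustration.

def run_turing_machine(rules, tape, start_state, halt_state):
    tape = list(tape)          # the tape: a sequence of symbols
    head = 0                   # position of the read/write head
    state = start_state
    while state != halt_state:
        symbol = tape[head] if head < len(tape) else "_"   # "_" means blank
        new_symbol, move, state = rules[(state, symbol)]   # 3. change the state
        if head < len(tape):
            tape[head] = new_symbol                        # 1. change the symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1                   # 2. move one square
    return "".join(tape).rstrip("_")

# Rules for a machine that flips every bit and halts at the first blank.
rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(rules, "010110", "scan", "halt"))  # prints 101001
```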

Works Cited

  1. https://www.macs.hw.ac.uk/~foss/valentin/Alan_Turning.html
  2. http://www-groups.dcs.st-and.ac.uk/history/Biographies/Turing.html
  3. https://www.thefamouspeople.com/profiles/alan-turing-4223.php
  4. https://www.livescience.com/29483-alan-turing.html
  5. http://blog.castac.org/2015/03/how-influential-was-turing/
  6. https://www.iwm.org.uk/history/how-alan-turing-cracked-the-enigma-code
  7. (B1) 수학자 컴퓨터를 만들다 [A Mathematician Builds the Computer]
  8. (B2) 천재 수학자들의 영광과 좌절 [The Glory and Frustration of Genius Mathematicians]

The Biography of Alan Turing, Famous Computer Scientist


I chose Alan Turing after watching the 2014 film The Imitation Game and discovering the significant, yet relatively unknown, impact he had on World War II. His work as a computer scientist, engineer, mathematician, and philosopher laid the foundation for many current technologies and new scientific fields. The inventions he both theorized and designed contributed to a number of Allied victories and supported the development of modern personal computers.

Turing was born on June 23, 1912, the son of a civil servant working for the Indian Civil Service (ICS). He was taught at top private schools, entering a well-known independent boarding school, Sherborne School, at age 13. He began to show signs of high intelligence and talent to his teachers, gravitating towards his natural interest in mathematics and the sciences. Nevertheless, Turing lacked motivation and facility in his classical studies, such as English, and was often criticized by his teachers for his messy handwriting. Alan Turing’s main motivation for his work is presumed by many to have been the science itself and the betterment of others, not status, fame, or wealth. In terms of higher education, Turing studied mathematics for four years at Cambridge University, from 1931 to 1934, graduating with first-class honours. In 1936, Turing presented a paper that coined the notion of a universal machine, later referred to as the “Turing machine”, capable of carrying out any calculation and a necessary predecessor of the modern computers of today. Afterward, he moved to New Jersey to study mathematical logic at Princeton University, completing his Ph.D. within two years at the age of 26. Alan Turing then returned to Cambridge and took a government job decrypting German communications, right around the start of World War II.

Enigma was a German device that encrypted messages to be sent over radio signals for military and diplomatic communication. The machine used a mechanism that scrambled the twenty-six letters of the alphabet using a different setting every day. Turing created the Bombe at Bletchley Park, a government facility, to decipher Enigma messages. The number of possible combinations was far too great to work through in one day, approximately 151 trillion. Nevertheless, he figured out that by feeding in some words known to appear at the start of each day, the number of possibilities could be limited to 17,576 combinations. For example, distinctive abbreviations and words were identified every day in the German daily weather forecasts. The Bombe reduced an incomprehensible number of possible Enigma settings to a manageable few, which were then used for further analysis. Later, at least two hundred of the machines were built and placed at military locations all over England, and the United States Navy designed its own version as well. By knowing the locations of the German U-boats, the British could steer Allied convoys away from them. The breaking of Enigma was also essential in winning the Allies the Battle of the Atlantic, the struggle between the western Allies and Germany for control of the Atlantic sea routes, as well as other crucial operations such as D-Day. Many historians estimate that without these advances the war would have continued for two more years and cost two million more lives.
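The figures quoted above can be reproduced with a few lines of arithmetic. The sketch below is only an illustration, assuming the commonly described setup of three rotors with 26 positions each and ten plugboard cables; it is in no way a model of the Bombe itself.

```python
# Back-of-the-envelope arithmetic for the numbers mentioned above.
from math import factorial

# Rotor starting positions: three rotors, each in one of 26 positions.
rotor_start_positions = 26 ** 3
print(rotor_start_positions)            # 17576

# Plugboard settings: 10 cables, each swapping a disjoint pair of letters.
# Choose 20 of the 26 letters and pair them up: 26! / (6! * 10! * 2^10).
plugboard_settings = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)
print(f"{plugboard_settings:,}")        # 150,738,274,937,250  (~151 trillion)
```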

After the war, Turing moved to London to work for the National Physical Laboratory, designing computers for the government. During this time he produced a pioneering design for a stored-program electronic computer, a practical counterpart of what went on to be known as the Universal Turing Machine, a machine able to be programmed to perform any calculation. He also went on to serve in the mathematics and computing departments at the University of Manchester. In a 1950 paper on machine intelligence, he proposed the Turing Test, a test of whether a computer can exhibit behaviour indistinguishable from that of a human. This benchmark sparked, and continues to influence, the discussion of artificial intelligence. In the middle of this cutting-edge work, during an investigation into a burglary at Alan Turing’s house, he admitted to a relationship with another man. Charged and convicted of “gross indecency”, he was offered a choice between imprisonment and twelve months of hormone therapy, and chose the latter. About a year after coming off the treatment, he was found dead in his bed, poisoned by cyanide. The official coroner’s inquest concluded that the death was suicide. A popular theory was that his suicide resulted from the effects of the treatment and that the poison was taken via a half-eaten apple found near his bed. The apple was never tested for cyanide, however, and some have theorized that the death was an accident caused by unintentionally inhaling hazardous fumes from the makeshift laboratory in his house. Others have gone further and called his death a murder by the secret services, since Turing held significant classified material and homosexuals were then regarded as threats to national security. After a determined internet campaign, British Prime Minister Gordon Brown publicly apologized in 2009 for Turing’s “utterly unfair” treatment by the government. Years later, in 2013, Queen Elizabeth II granted Turing a royal pardon.

Bibliography

  • Alan Turing. Biography.com, A&E Networks Television, 16 July 2019, www.biography.com/scientist/alan-turing.
  • Alan Turing. Wikipedia, Wikimedia Foundation, 23 Oct. 2019, en.wikipedia.org/wiki/Alan_Turing.
  • Code Breaking. History TV, www.history.co.uk/history-of-ww2/code-breaking.
  • Copeland, B.J. Alan Turing. Encyclopædia Britannica, Encyclopædia Britannica, Inc., 2019, www.britannica.com/biography/Alan-Turing.
  • History – Enigma (Pictures, Video, Facts & News). BBC, www.bbc.co.uk/history/topics/enigma.
  • Hodges, Andrew. Alan Turing — a Short Biography. 1995, www.turing.org.uk/publications/dnb.html.
  • Hodges, Andrew. Alan Turing. Stanford Encyclopedia of Philosophy, Stanford University, 30 Sept. 2013, plato.stanford.edu/entries/turing/.
  • Hodges, Andrew. iWonder – Alan Turing: Creator of Modern Computing. BBC, www.bbc.com/timelines/z8bgr82.
  • Smith, Chris. Cracking the Enigma Code: How Turing’s Bombe Turned the Tide of WWII. BT.com, BT, 2 Nov. 2017, home.bt.com/tech-gadgets/cracking-the-enigma-code-how-turings-bombe-turned-the-tide-of-wwii-11363990654704.
  • IWM Staff. How Alan Turing Cracked The Enigma Code. Imperial War Museums, 3 June 2002, www.iwm.org.uk/history/how-alan-turing-cracked-the-enigma-code.

How the Computer Was Invented: Alan Turing’s Work in the 1930s and John von Neumann’s in the 1950s


Introduction

The invention of computers–based on the work of Alan Turing in the 1930s and John von Neumann in the 1950s–quickly gave rise to the notion of artificial intelligence, or AI, the claim that such nonhuman machines can exhibit intelligence because they mimic (or so its proponents claim) what humans do when they do things we regard as being evidence of intelligence.

From about the late 1960s to the middle of the 1980s there was a great deal of excitement and debate among philosophers, psychologists, learning theorists, and others concerning the possibility and status of AI. Mostly there were AI champions and AI detractors, with little middle ground. That controversy seems to have cooled of late, but new developments in the computer-engineering field may now take us past those earlier debates. (Agre & Chapman 1987)

Research in the provocatively named field of artificial intelligence (AI) evokes both spirited and divisive arguments from friends and foes alike. The very concept of a “thinking machine” has provided fodder for the mills of philosophers, science fiction writers, and other thinkers of deep thoughts. Some postulate that it will lead to a frightening future in which superhuman machines rule the earth with humans as their slaves, while others foresee utopian societies supported by mechanical marvels beyond present ken. Cultural icons such as Lieutenant Commander Data, the superhuman android of Star Trek: The Next Generation, show a popular willingness to accept intelligent machines as realistic possibilities in a technologically advanced future. (Albus 1996)

However, superhuman artificial intelligence is far from the current state of the art and probably beyond the range of projection for even the most optimistic AI researcher. This seeming lack of success has led many to think of the field of artificial intelligence as an overhyped failure–yesterday’s news. Where, after all, are even simple robots to help vacuum the house or load the dishwasher, let alone the Lieutenant Commander Datas? It therefore may amaze the reader, particularly in light of critical articles, to learn that the field of artificial intelligence has actually been a significant commercial success.

In fact, according to a 1994 report issued by the U.S. Department of Commerce, (Critical technology assessment…1994) the world market for AI products in 1993 was estimated to be over $900 million!

The reason for this is, in part, that the fruits of AI have not been intelligent systems that carry out complex tasks independently. Instead, AI research to date has primarily resulted in small improvements to existing systems or relatively simple applications that interact with humans in narrow technical domains. While selling well, these products certainly don’t come close to challenging human dominance, even in today’s highly computerized and networked society.

Some systems that have grown out of AI technology might surprise you. For example, at tax time many of you were probably sitting in front of your home computer running packages such as TURBOTAX, MACINTAX, and other “rule-based” tax-preparation software. Fifteen years ago, that very technology sparked a major revolution in the field of AI, resulting in some of the first commercial successes of a burgeoning applied-research area. Perhaps on the same machine you might be writing your own articles, using GRAMMATIK or other grammar-checking programs. (Araujo & Grupen 1996) These grew out of technology in the AI subfield of “natural language processing,” a research area “proven” to be impossible back in the late 1960s. Other examples range from computer chips in Japanese cameras and TVs that use a technique ironically called fuzzy logic that improves image quality and reduces vibration, to an industrial-scale “expert system” that plans the loading and unloading of cargo ships in Singapore.

If you weren’t aware of this, you are not alone. Rarely has the hype and controversy surrounding an entire research discipline been as overwhelming as it has for the AI field. (AI: The Tumultuous History…1993) The history of AI includes raised expectations and dashed hopes, small successes sold as major innovations, significant research progress taking place quietly in an era of funding cuts, and an emerging technology that may play a major role in helping shape the way people interact with the information overload in our current data-rich society.

Where Artificial Intelligence Has Been

Roughly speaking, AI is several decades old, the field as a coherent area of research usually being dated from the 1956 Dartmouth conference. That summer-long conference gathered ten young researchers united by a common dream: to use the newly designed electronic computer to model the ways that humans think. They started from a relatively simple-sounding hypothesis: that the mechanisms of human thought could be precisely modeled and simulated on a digital computer. This hypothesis forms what is, essentially, the technological foundation on which AI is based.

In that day and age, such an endeavor was incredibly ambitious. Now, surrounded by computers, we often forget what the machines of forty years ago looked like. In those early days, AI work was largely performed by entering a program on difficult-to-use, noisy teletypes interfaced with large, snail-paced computers. After starting the program (assuming one had access to one of the few interactive, as opposed to batch, machines), one would head off to lunch, hoping the program would complete a run before the computer crashed. In those days, 8K of core memory was considered a lot of computing memory, and a 16K disk was sometimes available to supplement the main memory. In fact, anecdote has it that some of the runs of Herb Simon’s earliest AI systems used his family and students to simulate the computations–it was faster than using the computer! (Beer 1990)

Within a few years, however, AI seemed really to take off. Early versions of many ambitious programs seemed to do well, and the thinking was that the systems would progress at the same pace.

In fact, the flush of success of the young field led many of the early researchers to believe that progress would continue at this pace and that intelligent machines would be achieved in their lifetimes. A checkers-playing program beat human opponents, so could chess be far beyond? Translating sentences from codes (like those developed for military use during World War II and the Korean War) into human-understandable words by computer was possible, so could translation from one human language to another be too much harder? Learning to identify some kinds of patterns from others worked in certain cases, so could other kinds of learning be much different? (Beer 1995)

Unfortunately, the answers to all of these questions turned out to be yes. For technical reasons, chess is much harder than checkers to program.

Translating human languages turns out to have very different complexities from those encountered in decoding messages. The learning algorithms were shown to be severely limited in how far they could go. In short, the early successes were misleading, and the expectations they raised were not fulfilled.

At this point, things started getting complicated. Waiting in the wings and watching carefully were a number of people who were sure that this new technology would be a failure. Both philosophers and computer scientists were sure that getting computers to “think” was impossible, and they confused the early difficulties with fundamental limits.

The problems were magnified tremendously by the naysayers, who were using arguments about the theoretical limits (a current example of such theoretical arguments is Searle’s Chinese room argument) to describe failures of current technology. In short, those waiting to call the field a flop felt sure they were seeing evidence to that effect. (Beer 1997)

One can dwell at length on the early failures; there were plenty to go around. But as should have been clear to AI’s critics, these failures were not tragic. In fact, often they were extremely informative. This should come as no surprise; after all, this is how science works. Past failures coupled with new technologies led to many of the major advances in science’s history. It was the failure of alchemy coupled with better measurement techniques that led to elemental chemistry; the newly invented telescope coupled with the failures of epicycles led to acceptance of the heliocentric model of the solar system, and so forth.

In AI, these breakthroughs were less dramatic, but they were occurring. The exponential improvements in computing technology (doubling in speed and memory size every few years), coupled with increasingly powerful programming languages, made it easier for AI scientists to experiment with new approaches and more ambitious models. In addition, each “failure” added more information for the next project to build on. Science progressed and much was learned, often to the chagrin of AI’s critics. (Boden 1996)

Critics vs. Technology–The Example Of Computer Chess

A good example of how this progression occurred is in the area of chess-playing programs. By the end of the 1950s, computers were playing a pretty good game of checkers. A famous checker-playing program written by Arthur Samuel (who was not a very good player) had actually beaten him by the late 1950s, and in 1962 it beat Robert Nealy, a well-respected player (ex-Connecticut state champion). Chess seemed just around the corner, and claims that “in ten years, the best player in the world will be a machine” were heard.

It turns out, however, that checkers can be played fairly well using a simple strategy called “minimaxing.” Each move in checkers has at most a few responses, and searching for the best move doesn’t require examining too many possibilities. The complications of chess, on the other hand, grow very quickly. Consider:

There are twenty moves the first player can make, each followed by twenty possible responses. Thus, after each player has moved once, there are about four hundred possible chess boards that could result. The first player then moves again (another twenty or more possibilities), and thus there are now 400 x 20 = 8,000 possible ways the game could have gone. This sort of multiplying goes on for a long time–in fact, the calculation for how many total possibilities exist in a game of chess was estimated to be about 10^120. For even today’s fastest supercomputer to examine all of the possibilities would take over 10^100 years–well beyond the probable death of the universe. (Brooks 1991)
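A rough sketch of that multiplication, under the simplifying assumption of a constant twenty legal moves at every turn, shows how quickly the numbers explode; the running-time estimate likewise assumes an arbitrary illustrative rate of a trillion positions per second.

```python
# Illustrative arithmetic for the game-tree growth described above,
# assuming a constant twenty legal moves at every turn (a simplification).
positions = 1
for ply in range(1, 4):                       # the first three half-moves
    positions *= 20
    print(f"after {ply} half-move(s): {positions:,} possible games")
# after 1 half-move(s): 20
# after 2 half-move(s): 400
# after 3 half-move(s): 8,000

# At a trillion positions per second, 10**120 possibilities would take
# on the order of 10**100 years to enumerate.
seconds = 10 ** 120 // 10 ** 12
years = seconds // (60 * 60 * 24 * 365)
print(f"about 10**{len(str(years)) - 1} years")
```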

Given the complexity of chess, it’s hardly surprising that early programs didn’t do very well. In fact, it wasn’t until 1967 that a chess program (written by Richard Greenblatt, an MIT graduate student) competed successfully against human players at the lowest tournament levels. Greenblatt used a number of techniques being explored by AI scientists and tailored them for chess play. His program played chess at a rating reported to be between 1400 and 1500, below (but not far below) the average rating for players in the U.S. Chess Federation — and certainly better than most human neophytes.

In 1965, while researchers were still trying to figure out how to get the chess-playing programs to overcome the combinatoric problems (i.e., the plethora of possible moves), Hubert Dreyfus, one of the most outspoken critics of AI to this day, produced a report for the RAND Corporation that trashed AI. He argued, both philosophically and computationally, that computers could never overcome combinatoric problems. In fact, he stated categorically that no computer could ever play chess at the amateur level and certainly that no computer could beat him at chess. A couple of years later, he attempted to prove this by playing against Greenblatt’s program. He lost. (Dorffner 1997a)

Now, nearly thirty years later, the world’s best chess player is still not a machine. However, today there are a number of computer programs playing at the master level, and a few that are breaking into the rank of grand master. In a recent official long game, a computer beat a player ranked as the thirtieth best in the United States. Reportedly, in an “unofficial” short game recently, a chess program running on a supercomputer beat Garry Kasparov, arguably the best human player in the world. Most AI researchers believe that it is only a matter of a few years until computer chess programs can beat players of Kasparov’s caliber in official long matches.

What was happening in chess was happening (although somewhat less dramatically) in many other parts of the field. It turned out that most of the problems being looked at by AI researchers suffered from combinatoric problems, just as chess had. As in chess, coming up with both better machines and, more important, better techniques for “pruning” the large number of possibilities, led to significant successes in practice. In fact, in the late 1970s and early ’80s, AI was ready to come out of the laboratories, and it would have a great impact on the business (and military) world. (Dorffner 1997b)

The Breakthrough: What’s Hard Is Simple

What was the realization that led to the first successes of AI technology? The intuition was very simple. Many of the first problems that AI looked at were ones that seemed easy. If one wanted to try to get the computer to read books, why not start with children’s stories–after all, they’re the easiest, right? If one wanted to study problem solving, try basic logic puzzles like those given on low-level intelligence tests. In short, it seems obvious that to try to develop intelligent programs, one should first attack the problems that humans find easy. The real breakthrough in AI was the realization that this was just plain wrong! (Franklin 1885)

In fact, it turns out that many tasks that humans find easy require having a broad knowledge of many different things. To see this, consider the following example from the work of Roger Schank in the mid-1970s. If a human (or AI program) reads this simple story: “John went to a restaurant. He ordered lobster. He ate and left.” and is asked “What did John eat?,” the answer should be “Lobster.” However, the story never says that. Rather, your knowledge about eating in restaurants tells you that you eat what you order. Similarly, you could figure out that John most likely used a fork, that the meal was probably on the expensive side, that he probably wore one of those silly little bibs, and so forth. Moreover, if I mentioned “Mary” was with him in the restaurant, you’d think about social relations, dating customs, and lots more. In fact, to understand simple stories like this, you must bring to bear tremendous amounts of very broad knowledge. (Funes & Pollack 1997)
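A toy sketch of the idea, with a hand-written “restaurant script” whose slots and defaults are invented here purely for illustration (this is not Schank’s actual system), shows how background knowledge fills in facts the story never states.

```python
# Toy illustration of script-based story understanding in the spirit of
# Schank's restaurant example. The script, slots, and defaults below are
# invented for illustration only.

RESTAURANT_SCRIPT = {
    "ate": lambda facts: facts.get("ordered"),   # you eat what you order
    "paid": lambda facts: True,                  # leaving implies paying
    "utensil": lambda facts: "fork",             # a plausible default
}

def understand(story_facts, question):
    """Answer a question using stated facts first, then script defaults."""
    if question in story_facts:
        return story_facts[question]
    rule = RESTAURANT_SCRIPT.get(question)
    return rule(story_facts) if rule else None

# "John went to a restaurant. He ordered lobster. He ate and left."
facts = {"went_to": "restaurant", "ordered": "lobster", "left": True}

print(understand(facts, "ate"))       # lobster  (never stated in the story)
print(understand(facts, "paid"))      # True     (inferred from the script)
print(understand(facts, "utensil"))   # fork     (a default assumption)
```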

Now consider the following “story” from the manual for a personal computer hard-disk drive.

If this equipment does cause interference to radio or TV reception, which can be determined by turning the equipment on and off, the user is encouraged to try to correct the interference by one or more of the following measures: reorient the receiving antenna, relocate the computer with respect to the antenna, plug the computer into a different outlet so that the computer and receiver are on different branch circuits.

For a human, this story seems much harder to understand than the one about the lobster. However, if you think about it, you’ll realize that if your computer was given a fairly narrow amount of knowledge (about antennas, circuits, TVs, etc.) it would be able to recognize most of the important aspects of this story. No broad knowledge is needed to handle this. Rather, “domain specific” information about a very narrow aspect of the world is sufficient. In fact, this is much easier information to encode. Thus, developing a system that is an “expert” in hard disks is actually much easier than developing one that can handle simple children’s stories. (Johnson 1989)

Many of these narrow technical domains can be of great use. Recognizing what disease someone has from a set of symptoms, deciding where to drill for oil based on core samples, figuring out what machine can be used to make a mechanical part, configuring a computer system, troubleshooting a diesel locomotive, and hundreds of other problems require narrow knowledge about a specific domain. Building a system that has expertise in a specific area proves, in many ways, to be easier than building one for “simple” tasks.

Spurred by this realization, AI researchers developed programming technologies, known as rule-based systems or blackboard architectures, in the mid-to late 1970s. By the early 1980s, the term expert system came to be used to describe a program that could reason (or more often, help a human reason) through a specific hard problem. The rule bases could be embedded as parts of larger programs (such as control systems, decision support tools, CAD/CAM tools, and others) or used by themselves with humans providing inputs and outputs. As more and more industries and government agencies began to realize the potential for these systems, small AI companies were started, major companies started AI laboratories, and the AI boom of the 1980s was on. (Langton 1989)
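To give a flavour of what such rule-based systems looked like, here is a minimal forward-chaining sketch; the interference-diagnosis rules are invented for illustration and are far simpler than anything a commercial shell of the era contained.

```python
# Minimal forward-chaining rule engine in the style of 1980s expert systems.
# The interference-diagnosis rules are invented for illustration only.

RULES = [
    # (conditions that must all be known facts, conclusion to add)
    ({"tv_interference", "interference_stops_when_computer_off"},
     "computer_is_the_source"),
    ({"computer_is_the_source", "same_outlet_as_tv"},
     "advise_move_to_different_branch_circuit"),
    ({"computer_is_the_source"},
     "advise_reorient_antenna"),
]

def forward_chain(facts):
    """Repeatedly apply rules until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

observed = {"tv_interference", "interference_stops_when_computer_off",
            "same_outlet_as_tv"}
for advice in sorted(f for f in forward_chain(observed) if f.startswith("advise")):
    print(advice)
# advise_move_to_different_branch_circuit
# advise_reorient_antenna
```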

The Artificial Intelligence Boom

The 1980s was an exciting time to be an AI scientist. One didn’t know upon waking up in the morning if he would hear a news story about how AI was the magic bullet that would solve all the world’s woes or a critical piece about why expert systems weren’t really “intelligent,” written by the critics who had crowed over AI’s failures (and were eating crow over its perceived successes). Attendance at AI conferences swelled from hundreds in the late 1970s to thousands in the early 1980s. Any company that could build rule bases and afford some basic equipment declared itself an AI expert.

The early days of AI entrepreneurship were very similar to those of biotechnology or other high-technology industries. Many companies started, but most were unsuccessful. The few successes had to change products and techniques based on market forces; many look very different than what their founders expected. (Newell 1982)

However, AI technology, as it has matured and transitioned, has also become easier to use and more integrated with the rest of the software environment. So prevalent is this technology today that virtually every major U.S. high-technology firm employs some people trained in AI techniques–in fact, according to the Commerce Department report cited earlier, an estimated 70-80 percent of Fortune 500 companies use AI technology to some degree.

Strangely, despite the economic success of this technology, its long-run effect was to give AI something of a black eye in the marketplace. There are many reasons for this, but basically they boil down to a striking phenomenon: AI is a victim of its own success. So fast was the transition of this technology into the marketplace that in only ten years the necessary technology fell in cost and complexity by about one to two orders of magnitude.

Ten years ago, a special computer costing tens of thousands of dollars was needed to develop expert systems, but now they can be developed on generic workstations or even personal computers costing only a thousand dollars or so. Where the development environment for an expert system (called a shell) used to cost $20,000, today one can be bought for the price of the manuals. (In fact, numerous shells are available free on the Internet.) Thus, having the ability to build expert systems is no longer a high-cost investment; now anyone technically competent can do it, and do it cheaply. (Pfeifer & Scheier 1998)

The Artificial Intelligence ‘Bust’?

Unfortunately, there was a negative consequence of this drop in cost that emerged in the mid-to late 1980s. Because a lot of money was being invested in AI and anyone could enter the field, a great many people did so. Unfortunately, many of these newcomers had not learned the historical lessons of AI. Its rapid progress on some problems caused many to feel this would be easy to extend to other problems–if AI could handle hard tasks, certainly it could handle “easier” tasks such as reading newspapers, translating languages, playing games, and similar tasks. Even worse, people with little concept of the combinatorics of AI tasks would underbid on big development projects and then be unable to deliver two or three years later. Thus, many who joined late, unaware of the field’s history, made many of the same mistakes as had been made in the earliest days of the discipline. (Steels 1994)

Moreover, it turned out that many of the best expert systems didn’t function by themselves. Instead of being stand-alone systems that dispatched wisdom, expert systems turned out to be most useful when hidden behind larger applications. Take, for example, the DART (Dynamic Analysis and Replanning Tool) system developed in the early 1990s by Bolt, Beranek, and Newman. DART is a military transport planning program that was used by the U.S. military in Desert Shield and Desert Storm. It works by providing a graphical interface in which humans enter information about what materiel is going where and when. The system uses its knowledge to project delivery dates and to recognize possible problems in meeting those dates.

When a problem is found, DART does not fix it. Rather, it reports the information to the human user and asks what to do. Thus, the expertise in this system is not in making the “intelligent” decisions about what to do but rather in taking into account fairly prosaic low-level details and managing them for the user. In fact, this is true of most successful expert systems–the system functions more like a well-trained assistant than like an expert.

This is not a condemnation, however. DART is credited by the personnel at the U.S. Advanced Research Projects Agency (ARPA), the main government funder of AI research, as having “more than offset all the money that [ARPA] had funneled into AI research in the last 30 years.” (Vaario & Ohsuga 1997)

Unfortunately, despite the success of programs like DART, their interactive nature helped feed into a subtle negative perception that expert systems were not successful. Basically, once the tools reached a certain point of maturity, it became relatively easy to see how these systems worked. Understanding that the programs were only manipulating simple facts or recognizing simple patterns, people realized the programs were not “intelligent” at all, that humans were providing most of the “thinking” and the AI systems were just managing details. This gave the naysayers more ammunition–expert systems clearly were not intelligent by any obvious definition. Given the hype over these systems, many people were disappointed to find out that they were just relatively straightforward computer programs. In short, what was an industrial success proved to be insufficient to refute our critics’ condemnations–they won’t be satisfied until we build Mr. Data. (Tani & Nolfi 1998)

The Debate Goes On

Unfortunately, even great strides in information technology will not bring a “smart” computer. As the technology reaches fruition, the AI field will again be accused of just adding technology, not “developing intelligence.” I suspect that each time AI surpasses our current expectations and achieves results that change the way we live, work, and interact with computers, the ever-present critics will be there to fight us. In fact, probably no level of success will still the voices that accuse us of inflated claims, deflate our successes, and deny, to the very end, the very possibility of artificial intelligence.


The Imitation Game Movie: A Review of Alan Turing’s Test


The Turing Test was devised by a man named Alan Turing in 1950. It was initially called the “imitation game.” Originally, the test was designed to differentiate between males and females. It was played with three people: a man, a woman, and an interrogator. The interrogator would go into a separate room and try to determine who was the man and who was the woman by asking various questions such as “How long is your hair?” or “Do you have an Adam’s apple?” Based on the participants’ replies, the interrogator would decide who was the man and who was the woman. Often this wasn’t easy, since the participants were allowed to lie in order to throw the interrogator off.

Turing went a step further with the “imitation game” idea by incorporating computers into it. He believed that in approximately fifty years (around today) computers could be programmed to acquire abilities rivaling human intelligence. As part of his argument, Turing put forth a proposal in which a human being and a computer would be interrogated through textual messages by an interrogator who did not know which was which. If the interrogator were unable to distinguish them by questioning, then it would be unfair not to call the computer “intelligent.” Passing the test was taken to mean regularly and reliably fooling the interrogator at least 50% of the time.
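The structure of the test is simple enough to sketch in a few lines of Python. Both responders below are canned stand-ins invented purely for illustration; in the real test one would be a human and the other the machine under evaluation, and the judge would be a human interrogator rather than a crude heuristic.

```python
# A bare-bones sketch of the imitation game's structure, with invented
# stand-ins for the machine, the human, and the judge.
import random

def machine_responder(question):
    return "I would rather not say."                 # a deliberately evasive program

def human_responder(question):
    return "My hair is quite short, actually."

def run_round(question, judge):
    responders = [machine_responder, human_responder]
    random.shuffle(responders)
    players = dict(zip(["A", "B"], responders))      # the judge sees only labels A and B
    answers = {label: fn(question) for label, fn in players.items()}
    guess = judge(question, answers)                 # judge names the suspected machine
    return players[guess] is machine_responder       # True if the judge was right

# A naive judge heuristic: suspect the shorter, more evasive answer.
# Against canned answers like these it catches the machine every time;
# a machine "passes" only if judges do no better than roughly chance.
naive_judge = lambda q, answers: min(answers, key=lambda label: len(answers[label]))

trials = 100
caught = sum(run_round("How long is your hair?", naive_judge) for _ in range(trials))
print(f"machine identified in {caught} of {trials} rounds")
```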

Turing and Godwin both believed that anything that could pass the Turing Test was genuinely a thinking, intelligent being. In particular, they felt that passing the test illustrated that the computer had the ability to interact with humans by sensibly “talking” about topics that humans talked about. Also, passing the test according to Godwin reflected that the computer was able to understand how humans thought and interacted.

Despite Turing and Godwin’s obstinate belief that computers could think, many believed that this was not the case. In the book Can Animals and Machines Be Persons?, Goodman set out an objection called the “Chinese-box” argument. Essentially, a man who had no knowledge of Chinese would be placed in a box, and textual messages similar to those found in the Turing Test would be displayed on a screen in either English or Chinese. The man inside the box would then give the appropriate responses in Chinese. Despite his lack of knowledge of Chinese, the man would be able to give responses by using a large “Chinese Turing Test Crib Book.” Ideally, the person inputting the questions would be unable to distinguish the man’s Chinese from a native speaker’s. That argument was extremely damaging.

By describing the Chinese-box argument, Goodman was pointing out that externally it would seem that the man in the box understood both English and Chinese, when in reality he wasn’t “thinking in Chinese” the way he did in English – he was really just translating the symbols he saw into different symbols. Fundamentally, computers did the same thing. They would translate their binary code into symbols which we could understand. To do so, they would use rules analogous to those found in the “Chinese Turing Test Crib Book.” Overall, the Chinese-box argument supported the idea that a computer could cleverly imitate thinking and understanding but could never be a real, literal “thinker” or “person.”


The Imitation Game Movie: an Attempt to Solve the World’s Most Difficult Puzzle


The Imitation Game

Biographical film has been around for many years and is one of the most beloved genres, offering an exciting insight into a significant person and capturing historical events. Most people like the genre because they get to see an exciting version of a story that fascinates them, and many would rather be shown the sweetened, happy version because they like their movies to end “happily ever after”, though that is not always how things went. The meaning of “biographical”, however, is something quite different: these films are supposed to tell us the truth and nothing but the truth, so that we get a fair perspective on what happened. When we watch films that have been historically distorted just to make the narrative more engaging for viewers, we are left with misconceptions about what really happened. One film that gets things wrong is the 2014 Academy Award-winning “The Imitation Game”, which follows a man named Alan Turing during WWII on his quest to solve the world’s hardest puzzle.

Alan Turing is now best known for his involvement in breaking Enigma, a supposedly unbreakable code that could take 20 million years to defeat by trying every combination. Alan Turing and another mathematician, Gordon Welchman, collaborated to build an upgraded Bombe, a deciphering machine improving on an earlier design that no longer had the capacity to break Enigma. The first Bombe was built on the 18th of March 1940 and contributed greatly to the war effort; without Turing’s machine, historians estimate, the war would have lasted around two more years and cost millions more lives. Turing and his colleagues were told to stay quiet about the whole operation and to stay away from one another. Although Turing was a war hero, he was tried for indecency with another man and forced to take pills intended to make him “normal”. Alan Turing, the world’s most important codebreaker, committed suicide on the 7th of June 1954, aged 41, after poisoning his own apple with cyanide, worn out and wanting only to live peacefully with his machines.

Alan Turing was played by Benedict Cumberbatch in the 2014 film “The Imitation Game”, which tells the story of Turing and the rise of his code-breaking machine. The film includes many historically correct facts, displaying on the surface quite an accurate representation of his life during WWII and what it entailed for him. On closer inspection, however, you can see some historical errors; or rather, they are not so much errors as deliberate changes, there to create tension and add a certain dramatic effect to how his life really played out. Although there are many differences between the movie and the historical record, it isn’t too much of a problem, as they are small things that don’t ultimately change our view of the person in focus and of what happened to him.

Throughout the film there are key events that stand out to viewers as important to his life and his story, but not all of them are true. The idea that he was investigated for being a Soviet spy is false: he was never suspected, and the real Soviet spy worked in an entirely different group from Turing’s; the two never contacted or met each other. The central relationship in the movie, which shifted the audience’s view of certain characters, was invented out of thin air. The detective who uncovers Turing’s homosexuality was also stretched a bit far in the movie: he has a fictional name, and he never suspected Alan of being a spy, only thought that he was suspicious.

Although the filmmakers included some historically false moments, they mostly stuck to the real story of Alan Turing throughout. Alan’s personality is said to be a faithful recreation of the man himself, with Benedict Cumberbatch showing a great deal of skill on screen. The Bombe in the movie looks almost exactly the same as the real one, apart from the red wires that run all over it, most likely added to give the impression that the machine was alive and that it was Alan’s baby and most prized piece of work.

The film won many prestigious awards, including Best Adapted Screenplay, and was also honoured at the Palm Springs International Film Festival; it is clearly well regarded. Biopic films have been around for so long that people tend to forget what they are truly meant to be about. Biographical films such as this one largely do the genre justice, but others use false accounts to spice the story up so that more people will feel inclined to go and watch them at the cinema. At the moment, the main thing many producers worry about is money; they do not really care that people can be portrayed in the wrong light, or that this affects how a person will be remembered for years to come, because audiences base their judgements on what they have seen, believing it to be true when in fact it usually is not.

By Angus
