Actually, Alan Turing Did Not Invent the Computer

Blog January 8, 2014 9:00 am

Alan Turing.  Image taken from Turing’s Wikipedia page.

John von Neumann.  Image taken from von Neumann’s Wikipedia page.

That’s the bracing headline of Thomas Haigh’s article on Alan Turing, which appears, appropriately enough, in the latest Communications of the ACM (the Association for Computing Machinery, the premier organization of computer scientists).

Since the article is behind a paywall, I want to bring out some of its best points.  The first is that the pioneers of the then-emerging discipline of computer science did not want their science to be about building computers (which was seen as the task of electrical engineers), but rather about something more.  And so they reached into the past and extracted Turing’s 1936 paper, “On Computable Numbers” — a paper firmly anchored in the disciplines of mathematics and logic — which then became the foundation of the new discipline.  (Indeed, the ACM’s most prestigious award is named after Turing.)  Foundations of disciplines, Haigh says, are often constructed retrospectively.

Turing provided a crucial part of the foundation of theoretical computer science. There was no such thing as computer science during the early 1950s. That is to say there were no departments of computer science, no journals, no textbooks, and no community of self-identified computer scientists. An increasing number of university faculty and staff were building their careers around computers, whether in teams creating one-off computers or in campus computer centers serving users from different scientific disciplines. However, these people had backgrounds and appointments in disciplines such as electrical engineering, mathematics, and physics. When they published articles, supervised dissertations, or sought grants they had to be fit within the priorities and cultures of established disciplines. The study of computing always had to be justified as a means, not as an end in itself.

Ambitious computer specialists were not all willing to make that compromise and sought to build a new discipline. It was eventually called computer science in the U.S., though other names were proposed and sometimes adopted. To win respectability in elite research universities the new discipline needed its own body of theory. The minutiae of electronic hardware remained the province of engineering. Applied mathematics and numerical analysis were tied too closely to the computer center tradition of service work in support of physicists and engineers. Thus, the new field needed a body of rigorous theory unique to computation and abstracted from engineering and applied mathematics.

Turing was not, in any literal sense, one of the builders of the new discipline. He was not involved with ACM or other early professional groups, did not found or edit any journal, and did not direct the dissertations of a large cohort of future computer scientists. He never built up a laboratory, set up a degree program, or won a major grant to develop research in the area. His name does not appear as the organizer of any of the early symposia for computing researchers, and by the time of his death his interests had already drifted away from the central concerns of the nascent discipline.

When building a house the foundation goes in first. The foundations of a new discipline are constructed rather later in the process. Turing’s 1936 paper was excavated by others from the tradition of mathematical logic in which it was originally embedded and moved underneath the developing new field. In several papers historian Michael S. Mahoney sketched the process by which this body of theory was assembled, using pieces scavenged from formerly separate mathematical and scientific traditions. The creators of computer science drew on earlier work from mathematical logic, formal language theory, coding theory, electrical engineering, and various other fields. Techniques and results from different scientific fields, many of which had formerly been of purely intellectual interest, were now reinterpreted within the emerging framework of computer science. Historians who have looked at Turing’s influence on the development of computer science have shown the relevance of his work to actual computers was not widely understood in the 1940s.1,4,5

Turing’s 1936 paper was one of the most important fragments assembled during the 1950s to build this new intellectual mosaic. While Turing himself did see the conceptual connection he did not make a concerted push to popularize this theoretical model to those interested in computers. However, the usefulness of his work as a model of computation was, by the end of the 1950s, widely appreciated within large parts of the emerging computer science community. Edgar Daylight has suggested that Turing’s rise in prominence owed much to the embrace of his work by a small group of theorists, including Saul Gorn, John W. Carr, and Alan J. Perlis, who shared a particular interest in the theory of programming languages.3 His intellectual prominence has been increasing ever since, a status both reflected in and reinforced by ACM’s 1965 decision to name its premier award after him.  [my emphasis]

He then ventures into the question, vexed for historians of computing, of who invented the first computer.  The answer, as he points out, all depends on what you mean by “computer” and how you conceive of this entity — as an architecture, as a device, or as a mathematical abstraction.

The story behind all those “firsts” goes like this. From the late 1930s to the mid-1940s, a number of automatic computing machines were built. Their inventors often worked in ignorance of each other. Some relied on electromechanical relays for their logic circuits, while others used vacuum tubes. Several machines executed sequences of instructions read one at a time from rolls of paper tape. Thanks in part to a series of legal battles around a patent granted on the ENIAC these machines dominated early discussion of the history of computing and their creation has been well documented.

The “modern” or “stored program” computers from which subsequent computers evolved were defined by two interrelated breakthroughs. On an engineering level, computer projects of the late 1940s succeeded or failed based primarily on their ability to get large, fast memories to work reliably. The first technology proposed, by Eckert who oversaw the engineering of ENIAC at the University of Pennsylvania, was the mercury delay line. Freddy Williams, working on the computer project at Manchester University, was the first to successfully store bits on a cathode ray tube. These were the two dominant high-speed memory technologies until the mid-1950s.

On a conceptual level, the breakthrough was inventing what we could now call a computer architecture able to take advantage of the flexibility of these new memories. Historians agree that the first wave of modern computers under construction around the world during the late 1940s were all inspired by a single conceptual design, an unpublished typescript cryptically titled “First Draft of a Report on the EDVAC.” This unfinished document summarized discussions among the team working on a successor to ENIAC. Its title page named only John von Neumann as its author, though the extent to which he personally created the ideas within rather than summarizing the team’s progress has been much debated.

Finally — and this is the part that was new to me — he pushes back at the claims made on Turing’s behalf by the philosopher Jack Copeland (e.g. here).  He suggests that Copeland’s claims are well documented but lack historical perspective.

Copeland is deeply knowledgeable about computing in the 1940s, but as a philosopher approaches the topic with a different perspective from most historians. While he provides footnotes to support these assertions they are often to interviews or other sources written many years after the events concerned. For example, the claim that Turing was interested in building an actual computer in 1936 is sourced not to any diary entry or letter from the 1930s but to the recollections of one of Turing’s former lecturers made long after real computers had been built. Like a good legal brief, his advocacy is rooted in detailed evidence but pushes the reader in one very particular direction without drawing attention to other possible interpretations less favorable to the client’s interests.

As they say, read the whole thing!  [If you would like a PDF of the article, just send me an email.]
