Gary Marcus and Ernest Davis published a piece in The New Yorker debating whether coding will remain essential to the future of computing. While they acknowledge the trend toward automation and simplification, they also highlight the problems standing in the way of the argument that programming will soon become superfluous:
The impulse to simplify programming started in the early days, when computers were still room-sized mainframes made up of vacuum tubes. The advent of machine language, for instance, allowed computers to carry out new tasks by simply loading new programs, replacing the labor-intensive, error-prone process of physically rewiring and rearranging vacuum tubes and jumper cables. Early high-level languages, such as LISP, FORTRAN, and COBOL, developed in the late nineteen-fifties and early nineteen-sixties, allowed programmers to work with abstract constructs like loops and functions. (All of those early languages are still in use; a great deal of data analysis, for example, still relies on a set of linear-algebra routines that were written in FORTRAN.)…
This all sounds great. But before we reach the era of self-programming computers, three fundamental obstacles must be overcome.
First, there is currently no method for describing what a piece of software should do that is both natural for people and usable by computers. Existing “formal specification languages” are far too complex for novices, and English itself is still well beyond what machines can reliably parse. Programs like Siri have improved dramatically in recent years, and they can comprehend English in limited contexts. But they lack the precision required for building computer programs. When Siri hears “What Italian restaurants are around here?” she knows your location, and it’s fine that she only understands the words “Italian” and “restaurant.” But there is a world of difference between “Delete every file that has been copied” and “Copy every file that has been deleted.” For now, there is no reliable way to make a computer understand the difference between the two…
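To see how stark that difference is once the sentences become programs, here is a minimal sketch in Python. The helper names (is_copied, BACKUP_DIR, deleted_log) are hypothetical stand-ins invented for illustration; the point is only that two sentences built from the same words specify opposite behaviors.

```python
from pathlib import Path
import shutil

# Hypothetical convention for this sketch: a file counts as "copied"
# if a copy of it exists in BACKUP_DIR.
BACKUP_DIR = Path("backup")

def is_copied(path: Path) -> bool:
    return (BACKUP_DIR / path.name).exists()

def delete_copied_files(folder: Path) -> None:
    """'Delete every file that has been copied': destroys the originals."""
    for path in folder.iterdir():
        if path.is_file() and is_copied(path):
            path.unlink()

def restore_deleted_files(deleted_log: list[Path]) -> None:
    """'Copy every file that has been deleted': restores files from backup."""
    for original in deleted_log:
        backup = BACKUP_DIR / original.name
        if backup.exists():
            shutil.copy2(backup, original)
```

One routine ends with fewer files on disk, the other with more; getting from an English sentence to the right routine with the reliability programming demands is exactly what no current system can do.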
Each Tuesday is EducationTuesday here at Adafruit! Be sure to check out our posts about educators and all things STEM. Adafruit supports our educators and loves to spread the good word about educational STEM innovations!
“For now, there is no reliable way to make a computer understand the difference between the two…”
Come on, that is why there is natural-language technology. There are applications that can tell the difference. Also, one problem with Siri is that it has to ‘hear’ spoken text and convert it to digital form. Typed text is already digital, which saves a lot of trouble.
Please speak to some long-time programmers and software engineers about this subject! “Learning to code” is by no means the most important part of writing software; it’s very much like “learning to print” or “learning to draw”, just a means of representing an idea.
Education should stress the importance and methods of problem solving, of logical thinking, of talking to people who will use your end product …
I’m a software developer, but do I consider myself to be writing code for a living? Sure, “code” is sometimes the end result, but that’s just a small part of the process, and in some ways the least important one.