Don’t blame Computer Science
A mentor of mine once told me that the youth build their careers by spearing an older fish. Sometimes they set their sights on the wrong fish.
Corey Pein, on the road promoting his upcoming book, recently published a story in The Baffler — Blame the Computer. It’s an interesting piece, although I find its first act, some anecdotal coverage of the Hawaiian missile alert, somewhat distracting from the heart of the piece. Pein’s primary target seems to be the field of Computer Science, and he is armed with a copy of Computer Power and Human Reason, some historical quips about the motivations of Frederick Terman & Louis Fein, and a general disdain for what Silicon Valley seems to be doing these days. With these tools, he paints the field as having a “myopic concentration of collective effort” focused on “gadget-enabled inquiry”. He points at the problem of large Silicon Valley corporations combining techno-optimism with wealth and influence to skirt regulations and force others to adopt their worldview.
In many ways, he’s right. American Computer Science departments have benefited tremendously from their close ties to both military funding and corporations. In academia, Computer Science seems to be one of the few fields proud of its corporate sponsors (could you imagine the backlash in the biological sciences if Nature or Science were funded largely by Novartis, Pfizer, and Merck?). America’s fear of the Soviet Union, triggered by Sputnik, created ARPA (now DARPA), which then funded the creation of the Computer Science department at MIT in 1962, with more money soon leading Berkeley, Carnegie Mellon, and Stanford to create their own divisions. Similar military-funded research helped build other major engineering departments, and led to the creation of institutions such as MIT Lincoln Labs, Cornell Aeronautical Laboratory, and the Stanford Research Institute, all of which are notable for their roles in the birth of AI.

Speaking of MIT, if one wonders how it went from striving to be Harvard’s engineering department to a world-class institution in its own right, read this. GE & Bell Labs funding in the 1930s gave way to tremendous World War II spending from 1940–1945, which didn’t quite stop when the war ended. In 1939, MIT’s budget was $3M/year, of which 3% came from sponsored research. By 1949, its annual budget was $23M/year (mostly from sponsored research), and WW2 had given MIT over $100M. By 1964, MIT was spending over $115M/year, mostly from defense contracts. Cornell, by contrast, was very proud of the $10M it received during the entirety of WW2, and Princeton in 1947 was spending about $8M/year. War spending built the best engineering school in the country. And by some metrics, MIT isn’t even the best place to study Computer Science; that honor instead falls to Carnegie Mellon University.
CMU’s CS department has its own history of defense spending; for example, Geoff Hinton left the department the year after his famous backpropagation paper, partly because the lab was so heavily funded by American defense dollars.
Pein has more good points. He touches on how many new Computer Science graduates care more about using tools than understanding ideas, a problem handled with even less grace by the “bootcamp” industry. The success of black-box function approximators, advertised under the banners of Deep Learning, Machine Learning, and Artificial Intelligence, has made this problem even worse — fairly simple models (implementable in a few thousand lines of assembly), given large computers and large datasets, outperform most other methods. When the Director of Research at Google wrote in 2009 that “simple models and a lot of data trump more elaborate models based on less data”, he hinted at a trend soon to come, in which thinking about problems gave way to tinkering with a simple method. We’re now in an era where modern stacks of linear algebra and basic calculus outperform methods based on careful scientific study. And given enough data, they are general enough to be state-of-the-art across many other tasks.
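To make that phrase concrete, here is a minimal, purely illustrative sketch (not any particular system’s code) of what a “stack of linear algebra and basic calculus” boils down to: gradient descent on a parameterized function. A one-weight linear model is fit to data drawn from y = 3x, with no domain knowledge involved beyond the data itself:

```python
def fit_linear(xs, ys, lr=0.01, steps=1000):
    """Fit y ~ w*x by minimizing mean squared error with gradient descent."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of (1/n) * sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step downhill (the "basic calculus" part)
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]  # data generated by y = 3x
w = fit_linear(xs, ys)       # converges to roughly 3.0
```

Deep learning replaces the single weight with millions of them and the line with a stack of matrix multiplications, but the loop is the same: compute a gradient, take a step, repeat.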
So why study Computer Science? What is this field supposed to be? For interested historians, it seems worthwhile to start with a 1966 Technical Report outlining Stanford’s goals in Computer Science. It presents some early definitions and goals for the department, including this fun note:
At the present time we would regard a B. S. in Computer Science as too likely to be a terminal degree, although it might be excellent preparation for graduate work in fields like business administration or medicine. Our recommendation for Stanford undergraduates interested in computing is that they major in mathematics and take our senior-graduate courses.
In general, though, Forsythe defines CS as the application of mathematics to problems, and this still seems to me a very fine goal. It also brings me to my central gripe with Corey Pein’s article: Pein ignores the intellectual foundations, inherent tensions, and contradictory worldviews that have lain at the heart of Computer Science from the very beginning. Computer Science departments began with a great tension that has never quite abated — they consist of two different groups with related goals but inherently different ways of viewing the world. Roughly half of Computer Scientists stem from mathematics and are interested in building good computational concepts, models, and abstract ideas, such as the early work of McCarthy. The other half stem more from the world of engineering; they are interested in building systems that interact with and process information from the real world to turn it into something useful, and they tend to work on inverse problems. Folks with an interest in computer science theory, programming languages, logic, cryptography, and the like tend toward the former. Others, such as those interested in modern computer vision, applied machine learning, distributed computing, and robotics, tend toward the latter. Many other research areas in Computer Science sit somewhere in between. In Europe, this difference is formalized: the first group studies Computer Science, while the latter fall into Information Engineering departments.

I think George Forsythe said it best in 1965: the field would better be called the computer and information sciences, highlighting both those interested in modeling computation as an extension of mathematics and those interested in understanding how information can be processed and what it tells us about the world. He emphatically distanced himself from making it a tool-based field, stating
One thing computer science is not: it is not merely the union of the applications of a computer to diverse problems.
Just because many greedy and self-centered people are now drawn to Computer Science degrees and technology companies does not mean that the field itself somehow created them. In an era when the majority of the ten most valuable companies in the world make almost all their money on software, and when the growth of successful small companies rides mostly on the back of software start-ups amid declining entrepreneurship overall, it seems clear that software can be a great way to make a lot of money. And in America, the academic department best suited to educate and credential such individuals is Computer Science. But it is not a field’s fault if many of the most profit-motivated, unethical, and myopic individuals in society now wear its colors. This is an important distinction to make, because one day, when the tech boom passes and the money goes to biotechnology or robotics or something else, those types of people will still be around; they’ll just go elsewhere. So if one thinks ethics or communication courses are important to producing well-functioning individuals, advocate for them in all fields.