If Computer Science Is Doomed, What Comes Next?
There was excitement in the lecture hall. In October, Harvard University’s wildly popular (and open) introductory computer science course CS50 hosted a guest lecturer: Matt Welsh, co-founder of the AI application-building platform Fixie. But here is how he began his lecture.
“I’m here to tell you that the field of computer science is doomed. And I actually kind of mean this, although I’m going to put it in somewhat humorous terms.”
Because if “computer science” means transforming ideas into something that a machine can run, then the weakest link, as one of his slides put it, is… the humans.
Welsh is a former principal engineer at both Google and Apple, and a former Harvard professor of computer science. But he’d arrived to confront the students with a fresh perspective, or perhaps some harsh truths.
After 50 years of programming languages, of desperately trying new data types and methodologies and building elaborate ecosystems of tools, we humans still suck at programming. “And I don’t think another 50 years is going to solve it.”
But Welsh’s lecture went beyond the usual gloom to ask a much more interesting question: what happens next? How can we formalize the practice of working with large language models? What will our engineering teams even look like? And is there some essential human quality that needs to be preserved — or should we celebrate the end of our fallibility and the greater accessibility of the power of programming?
Finally, Welsh confronted head-on what’s perhaps the most important question of all: What, then, do we teach our young computer science undergraduates? Welsh delivered a thoughtful and well-informed attempt at what history may remember as a first-of-its-kind computer science lecture recognizing the start of a new era.
But what exactly does a gadfly tell a class of bright-eyed young CS students who are about to enter a field that’s being transformed by AI?
The Post-AI Software Industry
Earlier this year Welsh predicted “The End of Programming,” warning of a future with “humans relegated to, at best, a supervisory role.” But on his personal blog, Welsh wrote a companion piece trying to game out what that world should ultimately become. “What we need to figure out is what the post-AI software industry looks like and what we can do now to get ready for it.”
So back at Harvard, Welsh shared tales from the new frontier of coding with AI, where programmers are trying to acquire a new skill set: teaching AI models effectively. In a world where you have to write DO NOT in all capital letters to get your large language model to comply, “what are the best practices? And beyond best practices, can we turn this from effectively a dark art into a science? Into an engineering discipline?”
There’s something tongue-in-cheek about adding the word “engineering” to the phrase “prompt engineering,” Welsh said. “It’s not really a thing yet. But it may well be in the future if we do this right.”
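For a flavor of what those practices look like today, here is a minimal, made-up sketch: a system prompt that leans on all-caps emphasis, the kind of folk technique Welsh is describing. (The support-bot scenario is purely illustrative.)

    # A made-up illustration of the all-caps folk technique Welsh mentions.
    # Today's "best practice" is often just shouting at the model in its
    # system prompt and hoping it complies.
    system_prompt = (
        "You are a customer-support assistant. Answer only from the FAQ below. "
        "DO NOT invent policies. DO NOT quote prices that are not in the FAQ."
    )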
A @CS50 tech talk at @Harvard on #LLMs and #programming with @mdwelsh. “Computer Science is headed for a major upheaval with the rise of large #AI models, such as #ChatGPT, that are capable of performing general-purpose reasoning and problem solving…” https://t.co/vcfsb7iz9c
— David J. Malan (@davidjmalan) October 31, 2023
Later Welsh provided a clarifying example: an engineer discovered a few months ago that “the magic words” were “Let’s think step-by-step.” “If you say that to the model, that somehow triggers it to go into computation mode now. It’s no longer just parroting back some answer. It’s actually going to say, ‘Okay, well, I have to actually elucidate each of my instructions.’” Welsh underscored a key point: “That was discovered empirically. It was not trained in any model. No one knew it was there. It was a latent ability of these models that, effectively, somebody stumbled across and wrote a paper about it…”
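In code, the trick is nothing more than appending the phrase to the prompt. A minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the arithmetic question are illustrative, not from the lecture:

    # Zero-shot chain-of-thought prompting: the empirically discovered trick
    # Welsh describes. Appending "Let's think step by step" nudges the model
    # into spelling out its reasoning instead of blurting out an answer.
    # Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY set;
    # the model name and question here are illustrative.
    from openai import OpenAI

    client = OpenAI()

    question = "A jug holds 3 liters and a cup holds 250 ml. How many cups fill the jug?"

    # Plain prompt: the model may just guess.
    plain = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )

    # The "magic words": the same question, plus the step-by-step cue.
    cot = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question + " Let's think step by step."}],
    )

    print(plain.choices[0].message.content)
    print(cot.choices[0].message.content)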
Answering questions later, Welsh argued: “We have to derive the manual through experimentation.”
It’s not hard to get a mistake out of a large language model. The hard part is understanding why, well enough to know what to do next. “I do think that over time, we’re going to get to a place where programming ends up getting replaced by teaching these models new skills… Teaching them how to interface to APIs, and pulling data from databases, and transforming data, and how to interact with software meant for humans.”
“That’s going to become an entire discipline right there.”
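Welsh didn’t demo the mechanics, but today “teaching a model to interface to an API” typically means handing it a machine-readable description of a function and letting it decide when to call it. A minimal sketch using the OpenAI tool-calling interface; the weather lookup and model name are hypothetical illustrations, not anything from the lecture:

    # A sketch of "teaching a model to interface to an API": describe a
    # function to the model, let it decide when to call it, then run the
    # call it asked for. The weather function is a hypothetical stand-in.
    import json
    from openai import OpenAI

    client = OpenAI()

    def get_weather(city: str) -> str:
        """Stand-in for a real API call."""
        return json.dumps({"city": city, "forecast": "sunny", "high_c": 21})

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up today's forecast for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What's the weather in Boston?"}],
        tools=tools,
    )

    # If the model chose to "use the API", run the function it asked for.
    for call in response.choices[0].message.tool_calls or []:
        if call.function.name == "get_weather":
            args = json.loads(call.function.arguments)
            print(get_weather(**args))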
Later he described it as a potentially “thorny” and “uncertain” problem. “How do we reason about the capabilities of these models in a formal way? That is, how can we make any kind of statement about the correctness of a model when asked to do a certain task?”
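Nobody can answer that formally yet. In practice, the closest thing to a statement of correctness is empirical: run the model over a test suite and report a pass rate. A toy sketch of that idea, where ask_model is a hypothetical stand-in for any LLM call:

    # Toy sketch of the empirical stand-in for "correctness": score a model
    # against a test suite. No formal guarantee falls out of this, which is
    # exactly Welsh's point.
    test_cases = [
        ("What is 17 * 23?", "391"),
        ("Reverse the string 'abc'.", "cba"),
    ]

    def pass_rate(ask_model) -> float:
        passed = sum(expected in ask_model(prompt) for prompt, expected in test_cases)
        return passed / len(test_cases)

    # A deliberately flawed dummy model, just to make the sketch runnable.
    def dummy_model(prompt: str) -> str:
        return "391" if "17" in prompt else "(no idea)"

    print(pass_rate(dummy_model))  # 0.5: half right, and we still can't say why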
What Happens to Humans?
But when it comes to AI replacing human programmers, “I think this is all something that we really have to take seriously…” Welsh said. “I don’t think that this is just — I am exaggerating for effect. But the industry is going to change. So the natural question then is, well, what happens when we cut humans out of the loop? How do we build software? How do we ship product?”
Welsh pondered the ramifications of this world. Current coding priorities like readability and reusability “are only because poor humans have to wrangle with this stuff.” But imagine a world where “It doesn’t really matter if it’s duplicative or repetitive or modular or nicely abstracted.” Welsh put up a diagram of how he envisions the software team of the future…
Welsh hedged that he’s “not sure” if all of computer science will one day become a historical artifact, but presented his vision of a “plausible” future, with people “not writing programs in the conventional way that we do today, and instead, having an AI do their bidding.” In his telling, that happens partly through platforms like Fixie, his company’s tool for easily creating AI-based applications.
The good news? A future like this unlocks the power of computing for a much vaster portion of the population. “That’s tremendously empowering. I think we should all, as a field, aspire to that level of access to the power of computing. It should not remain in the priesthood.”
Welsh isn’t blindly optimistic. “It is not to say that all the problems have been solved, nowhere near it.
“The biggest dirty secret in the entire field is no one understands how language models work. Not one person on this planet.”
An Undergrad Responds
One brave student in the audience asked whether the software engineering role might persist. Perhaps programmers of the future will be 10,000 times more efficient when assisted by AI. And the student raised the possibility that “not everything that makes the software engineer, the software engineer… is provided in actual data.”
Welsh thoughtfully summarized the question. “Maybe there’s an ineffable quality to being a human software engineer — something about our training, our knowledge of the world, our ethics, our socialization with other humans — that a language model is not going to capture.”
“I think it’s a good question.”
But Welsh seemed to focus on the other side of that scenario, where our poor human brains have their “bandwidth limit, which is an individual mind has to go through this syntactic description of what they want to do in these God-awful languages like CSS and JavaScript and Python and Rust… It’s a barrier to actually enabling what you could build with computation from actually becoming a reality. It’s like drinking through a very narrow straw.”
By the next question, Welsh was envisioning “the human and the AI model iterating together… where the AI model is doing the stuff it’s good at, the human is doing the things it’s good at.”
But the final question asked where all of this leaves the CS50 students of today. Will today’s “classical” programmer training be in any way helpful in a future where that entire layer has been abstracted away by AI-powered interfaces?
“That’s the real question,” Welsh said. He seemed to look back on his own career: learning how circuits worked in undergraduate classes at Cornell, followed by graduate-level coursework on operating systems and systems programming and understanding “what’s a stack?”
But if society wants its students to learn how the programs in their world are being created, “I think it would be a mistake for, say, university programs to not pay attention to this, and to assume that teaching computer science the way it’s been done for the last 25 years is the right thing in this future.”
Welsh went on to say he doesn’t have a specific vision of exactly what to teach. And granted, there’s already a large gap between academia and the real world. “But I do think that we have to think about, how do people reason about these models?”
Welsh expressed a hope for a programming class in the future that can “go deep into understanding some of the mechanics behind things like ChatGPT. Understanding data — how it comes in. Understanding how models are constructed, how they’re trained, what their limitations are, how to evaluate them. Because the fear that I have is that students just view this thing as this magical black box that will do anything for them and have no critical thinking around that.”
But he closed with the most startling pronouncement of all — and also the most human. “However, I do know from my own experience that it is a magical black box. And I don’t understand how it works.
“But see, I’m okay with that because it does so many great things for me.
“Anyway, thank you very much. And I’ll be around for pizza too.”