**In this Generative Computing blog series, we’ll explore the alternative framing that large language models are best thought of as computational engines, not so different from the computers that we’re used to today.** Like computers, they take instructions (albeit written in natural language), process various kinds of input data, and transform it into output data. This is not a new observation—and arguably part of what we’re doing here is to give a name to a trend that is already emerging in the field. However, I would argue that we’ve only begun to scratch the surface of embracing generative AI as computing. I believe that taking this idea seriously will lead us to new programming models for interacting with LLMs, new tools and patterns for LLM usage, and even new ways of training LLMs. At the core of our philosophy is a belief that the full potential of generative AI will be realized by weaving AI together with traditional software in a seamless way. Generative computing describes a worldview where LLMs are an extension of computer science, not some alien entity set apart from it.[^1]
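To make the "computational engine" framing concrete, here is a minimal sketch of what it looks like to treat an LLM call as an ordinary function: the instruction is natural language, but the inputs and outputs are typed data like any other. The `call_model` function here is a hypothetical placeholder, not a real API.

```python
# A minimal sketch of the "LLM as computational engine" framing.
# `call_model` is a hypothetical stand-in for a real model API call.

def call_model(instruction: str, data: str) -> str:
    # Placeholder: a real implementation would send the instruction
    # and data to a model endpoint and return its completion.
    return f"[model output for {instruction!r} on {len(data)} chars of input]"

def summarize(text: str) -> str:
    """An ordinary function whose 'implementation' is a natural-language instruction."""
    return call_model("Summarize the following text in one sentence.", text)

result = summarize("Large language models can be viewed as computational engines.")
print(result)
```

Seen this way, an LLM invocation composes with traditional software exactly like any other function call, which is the weaving-together that the series is about.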