Large language models (LLMs) — the machine learning models behind systems such as ChatGPT, Gemini, and Claude — are often criticized for lacking genuine “understanding” and the ability to “reason” with their data. They are seen as stochastic autocomplete engines. In the March AIChE Journal Perspective article, “Do large language models ‘understand’ their knowledge?”, Venkat Venkatasubramanian calls for a more nuanced view of what it means for these models to understand and reason with the knowledge they process.
He proposes that LLMs do develop an animal-like empirical understanding of their domain, which is adequate for some applications. However, this representation is usually constructed from incomplete and noisy data and, therefore, lacks robustness, generalization, and explanatory power, which are critical to applications in science and engineering.
He explores the fundamental differences between algebraic and geometric representations in understanding and solving problems. He compares them to number systems such as the Hindu-Arabic and Roman numerals to emphasize how the choice of representation can simplify or complicate problem-solving.
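To make the representational point concrete, here is a minimal sketch (an illustration of ours, not an example taken from the Perspective article) that multiplies the same two numbers in both notations. Positional Hindu-Arabic digits support the familiar shift-and-add long-multiplication algorithm; Roman numerals offer no comparable procedure, so in practice one converts to a positional form, computes, and converts back.

```python
# Illustrative sketch: the same multiplication in two representations.
# Positional (Hindu-Arabic) notation makes the schoolbook algorithm trivial;
# the Roman form carries no algorithmic leverage of its own.

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer to Roman numerals."""
    out = []
    for value, symbol in ROMAN:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

def long_multiply(a: int, b: int) -> int:
    """Schoolbook long multiplication using positional digits."""
    digits_b = [int(d) for d in str(b)][::-1]  # least-significant digit first
    total = 0
    for place, digit in enumerate(digits_b):
        total += a * digit * 10 ** place       # shift-and-add, one row per digit
    return total

print(long_multiply(48, 37))  # 1776 -- straightforward in positional notation
print(to_roman(48), "x", to_roman(37), "=", to_roman(long_multiply(48, 37)))
# XLVIII x XXXVII = MDCCLXXVI -- same quantity, far less workable representation
```

The difficulty lies in the representation, not in the underlying quantity, which is the point the numeral-system analogy is meant to convey.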
Using the Hindu...