The so-called “Efficient Compute Frontier” (ECF) refers to an apparent hard constraint on the achievable error reduction as a function of the computational work expended in processing training data for large language models (LLMs). The Artificial Intelligence (AI) community has questioned whether this previously unknown and unexpected constraint represents some kind of fundamental law of nature. We present a model of LLM neural-network dynamics that exhibits power-law behavior and matches the ECF constraint, C_min(N) = a N^{-b}. The prefactor a = 10^{-8} sets the scale of the neural-network connections (on the order of billions), while the exponent b = 0.05 is indicative of subnetwork correlations that are much stronger than Zipf's law. In this way, we are able to answer the original question in the negative. Our result notwithstanding, and given that the 2024 Nobel Prize in Physics was shared by an AI researcher, this burgeoning area of Generative AI would seem to offer fertile ground for interdisciplinary physics.
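As a quick numerical illustration (not part of the paper's analysis), the quoted power law can be evaluated directly with the stated values a = 10^{-8} and b = 0.05; the grid of N values below is an arbitrary assumption for demonstration only. The shallow exponent means the frontier varies very slowly: across six orders of magnitude in N, C_min(N) changes by only about a factor of two.

```python
import numpy as np

# Minimal sketch of the power-law frontier quoted in the abstract:
#   C_min(N) = a * N**(-b)
# with a = 1e-8 and b = 0.05. The range of N is a hypothetical choice
# for illustration; it is not taken from the paper.
a = 1e-8   # prefactor quoted in the abstract
b = 0.05   # exponent quoted in the abstract

N = np.logspace(6, 12, 7)     # network sizes from 1e6 to 1e12 connections
C_min = a * N ** (-b)         # evaluate the frontier at each N

for n, c in zip(N, C_min):
    print(f"N = {n:.0e}  ->  C_min(N) = {c:.3e}")
```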