Thursday, July 12, 2012

Validating Model Software

Some interesting thoughts from Forrest Shull's article "Assuring the Future? A Look at Validating Climate Model Software":

Validating Model Results

[The main question is] how the field assesses the quality and trustworthiness of results coming out of the software models—in other words, how does anybody know that the software is working correctly? I've been particularly fascinated by this issue for some time. Unlike many other domains in which software is created, in climate studies the correct answer or behavior is not known in advance; that's why we need the software in the first place.

[There are] a number of ways the community validates climate code. Some of these are strategies that any developer should recognize, while others are uniquely fitted to this domain.

Testing the Code

Climate code is typically very well tested. ... Good testing practices, such as regression testing and tracking issues to completion, are typically adopted. Nightly builds and automated test suites catch a lot of inadvertent changes.

Divide-and-Conquer Strategies

At a high level, climate models tend to have four major components that model the atmosphere, the ocean, sea ice, and land. For purposes of validation, the components can be separated and their output judged independently by experts in the relevant domain. Each of these comparisons is a large undertaking that can occupy a research scientist for years. But, as Robert suggests, this approach represents a "step-by-step approach to building confidence in the pieces."
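The regression-testing practice the excerpt describes—comparing current output against a frozen "known good" run to catch inadvertent changes—can be sketched roughly as follows. This is my own illustrative example, not code from the article; `run_model`, `regression_check`, and the toy diffusion update are all hypothetical stand-ins for a real model component.

```python
def run_model(steps):
    """Hypothetical stand-in for a climate-model component: a toy 1-D
    diffusion update on a three-cell field."""
    field = [0.0, 1.0, 0.0]
    for _ in range(steps):
        field = [
            field[i] + 0.1 * ((field[i - 1] if i > 0 else field[i])
                              - 2 * field[i]
                              + (field[i + 1] if i < len(field) - 1 else field[i]))
            for i in range(len(field))
        ]
    return field

def regression_check(current, reference, tol=1e-9):
    """Flag inadvertent changes: current output must match the reference
    run to within a small tolerance."""
    return all(abs(c - r) <= tol for c, r in zip(current, reference))

# The reference output would be frozen from a trusted earlier build;
# a nightly suite then re-runs the model and compares.
reference = run_model(10)
assert regression_check(run_model(10), reference)                    # unchanged code passes
assert not regression_check([x + 1.0 for x in reference], reference) # drift is caught
```

Note that a tolerance is essential here: as the excerpt implies, bit-for-bit equality is often too strict once compilers, platforms, or optimization flags change, so "matching" is usually defined numerically rather than exactly.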