Artificial Intelligence for Computational Sustainability: A Lab Companion/Constraint-Based Reasoning and Optimization

Overview

Many problems in sustainability require moving beyond the classic goal-based search methods of the previous chapter to include knowledge of hard constraints that cannot be violated, as well as preferences for one solution state over others. Constraint-based reasoning generally connotes the use of hard, inviolable constraints to limit the space of solutions that can be obtained for a problem -- an acceptable solution is one that violates no hard constraint. A dead-end in classic goal-based search can generally be viewed as a state that violates a hard constraint (as do all of its descendants, given the available operators), though the constraint is rarely made explicit in the classic search paradigm.
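To make the idea concrete, the minimal sketch below casts a tiny, hypothetical four-region map-coloring problem as hard-constraint reasoning: no two adjacent regions may share a color, and any partial assignment that breaks that rule is a dead-end. The regions, adjacencies, and colors are invented purely for illustration.

```python
# A minimal sketch of hard-constraint reasoning on a hypothetical map-coloring
# problem: no two adjacent regions may share a color. A partial assignment that
# violates the constraint is a dead-end, mirroring how a dead-end in goal-based
# search can be read as a hard-constraint violation.

NEIGHBORS = {                       # adjacency of four hypothetical regions
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}
COLORS = ["red", "green", "blue"]

def consistent(region, color, assignment):
    """Hard constraint: a region may not share its color with any already-assigned neighbor."""
    return all(assignment.get(n) != color for n in NEIGHBORS[region])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):
        return assignment                            # complete and violates no constraint
    region = next(r for r in NEIGHBORS if r not in assignment)
    for color in COLORS:
        if consistent(region, color, assignment):
            result = backtrack({**assignment, region: color})
            if result is not None:
                return result
    return None                                      # dead-end: every extension violates a constraint

print(backtrack({}))
```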

Optimization problems assume that some solutions are preferable to others -- that there is a partial ordering of quality over solution (or goal) states -- and that it is desirable to find the best, or optimal, of these states according to an objective function that evaluates the quality of solution states. In the classic search paradigm, the only preference for one goal state over another is in terms of solution path cost, where goals reached by least-cost paths are called optimal. Optimization problems generally allow optimality to be defined in any way that can be expressed as an objective function, which assigns a quality (or utility) score to a solution state.
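Continuing the toy map-coloring example, the sketch below layers an objective function on top of the hard constraints: each color is given a hypothetical price, all feasible colorings are enumerated, and the cheapest one is preferred. The cost figures are made up for illustration only.

```python
# A sketch of optimization over constrained solutions: among all colorings of the
# same hypothetical map that satisfy the hard adjacency constraints, prefer the
# one with the lowest total cost under an invented per-color price.

from itertools import product

NEIGHBORS = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "D"}, "D": {"B", "C"}}
COLORS = ["red", "green", "blue"]
COST = {"red": 3, "green": 2, "blue": 1}      # hypothetical cost of using each color in a region

def feasible(assignment):
    """Hard constraints: adjacent regions never share a color."""
    return all(assignment[r] != assignment[n] for r in assignment for n in NEIGHBORS[r])

def objective(assignment):
    """Quality score to be minimized: total cost of the colors chosen."""
    return sum(COST[c] for c in assignment.values())

regions = list(NEIGHBORS)
candidates = (dict(zip(regions, colors)) for colors in product(COLORS, repeat=len(regions)))
feasible_solutions = [a for a in candidates if feasible(a)]
best = min(feasible_solutions, key=objective)
print(best, objective(best))
```

Here the hard constraints carve out the set of acceptable solutions, and the objective function imposes the preference ordering over that set; exhaustive enumeration works only for toy instances, which is what motivates the specialized constraint and optimization methods discussed in the cited texts.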

AI textbooks with substantive treatments of constraints and optimization include Russell and Norvig (2010)[1], Chapters 4 and 6, and Poole and Mackworth (2010)[2], Chapter 4. In general, because constraints and preferences are as pervasive as search itself, these concepts are sprinkled throughout AI textbooks -- machine learning approaches, for example, are often cast as optimization problems.

A computational sustainability perspective often suggests that a sustainability problem be cast as an optimization and constraint-based problem. For example, Conrad et al. (2010)[3] illustrate an optimization-with-constraints approach in designing a grizzly bear corridor, which allows grizzlies to move between protected regions. Hard constraints in such a problem might include the requirement that any corridor connect existing protected regions. The objective function that scores the quality of different solutions (i.e., corridors) can include the monetary cost of the land purchased for each candidate corridor -- if purchase cost were the only factor, then cheaper corridors (among those satisfying the hard constraints) would be preferable. But there might be other factors as well, such as (lost) opportunity costs and the probability that grizzlies would stray outside the corridor boundaries as a function of the differing terrains within a corridor.
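The sketch below is a drastically simplified, hypothetical version of such a problem -- it is not the formulation of Conrad et al. (2010). Land parcels form a grid with invented purchase costs, the hard constraint is that the purchased parcels must form a connected corridor between two reserve cells, and the objective is total purchase cost; for a single-parcel-wide corridor this reduces to a cheapest-path search.

```python
# A toy sketch (not Conrad et al.'s actual formulation) of corridor design as
# constrained optimization: parcels form a grid with hypothetical purchase costs,
# the hard constraint is a connected corridor between the two reserves, and the
# objective is total cost. Dijkstra's algorithm finds the cheapest one-parcel-wide corridor.

import heapq

COST = [                                # hypothetical per-parcel purchase costs
    [1, 4, 4, 9],
    [2, 8, 1, 3],
    [5, 1, 1, 2],
]
RESERVE_A, RESERVE_B = (0, 0), (2, 3)   # grid cells adjoining the two protected regions
ROWS, COLS = len(COST), len(COST[0])

def cheapest_corridor(start, goal):
    """Return (total cost, parcels) of the cheapest connected corridor from start to goal."""
    frontier = [(COST[start[0]][start[1]], start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < ROWS and 0 <= nc < COLS and (nr, nc) not in seen:
                heapq.heappush(frontier, (cost + COST[nr][nc], (nr, nc), path + [(nr, nc)]))
    return None                                      # no corridor satisfies the connectivity constraint

print(cheapest_corridor(RESERVE_A, RESERVE_B))
```

A richer formulation would fold the other factors mentioned above -- opportunity costs and the probability of grizzlies straying on different terrains -- into the objective, rather than scoring corridors by purchase cost alone.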

Another class of factors in sustainability problems, when cast as optimization and constraint problems, stems from change. Optimization factors and constraints can change over time, and solution robustness in the face of environmental and societal change is critical. In the grizzly bear corridor design problem, human encroachment may be a more likely eventuality for some corridors than for others, or climate change may alter the habitat of one of the grizzlies' favorite foods, causing more grizzlies to stray. As another example, consider the placement of wind farms in a country such as China (see Powell and colleagues (ref)), where optimization techniques are used to find the best placement of wind farms based on current wind trends; these trends may evolve with climate change, rendering the placement no longer optimal. There are no easy answers here, but characterizing solution robustness in the face of change is a critical challenge when applying optimization and constraint reasoning strategies to sustainability problems. After characterizing the influences that affect robustness, the characterizations can be used as 'meta' optimization objective functions or folded into the base objective functions themselves. (This may be a good way of motivating stochastic optimization and transitioning from deterministic approaches.)
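One simple way to fold change into the objective, sketched below with entirely hypothetical sites and numbers, is to score each candidate wind-farm site over many sampled future wind scenarios rather than a single current estimate, and to compare sites by expected output and by a pessimistic (e.g., 10th-percentile) output -- a first step toward stochastic optimization.

```python
# A sketch of building robustness to change into the objective: instead of scoring
# each candidate wind-farm site only under today's average wind, score it over a
# set of sampled future wind scenarios and compare expected and worst-decile output.
# All sites, wind statistics, and the power curve are hypothetical.

import random

random.seed(0)

# Hypothetical mean wind speed (m/s) and year-to-year variability per candidate site.
SITES = {"coastal": (8.0, 2.5), "inland": (7.0, 0.8), "ridge": (9.0, 3.5)}

def power(wind_speed):
    """Crude stand-in for a turbine power curve: output grows with the cube of wind speed."""
    return max(wind_speed, 0.0) ** 3

def scenario_scores(mean, std, n=1000):
    """Sample future wind scenarios and return the site's output in each one."""
    return [power(random.gauss(mean, std)) for _ in range(n)]

for name, (mean, std) in SITES.items():
    scores = scenario_scores(mean, std)
    expected = sum(scores) / len(scores)
    worst_decile = sorted(scores)[len(scores) // 10]   # robustness: 10th-percentile output
    print(f"{name:8s} expected={expected:8.1f} 10th-percentile={worst_decile:8.1f}")
```

A site that looks best on expected output may look worse once the pessimistic score is taken into account; either statistic (or a weighted mix) can then serve as the objective to be optimized.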

Sources

  1. Russell, S. & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (Third Edition). Prentice Hall, NJ
  2. Poole, D. and Mackworth, A. Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press is freely available on the Web (http://artint.info/index.html)
  3. Conrad, J., Gomes, C., van Hoeve, W., Sabharwal, A., and SuterConrad, J. Incorporating Economic and Ecological Information into the Optimal Design of Wildlife Corridors, Computing and Information Science Technical Reports,URI:http://hdl.handle.net/1813/17053. Cornell University, Ithaca, NY, 2010