Power generation resources in Pakistan are currently being developed without due regard to their full financial implications, leading some critics to accuse the government of adding unnecessary capacity to the grid, which results in higher-than-normal generation costs. This article examines the criterion presently used to determine generation requirements and asks whether it is technically sound and sensitive to its financial implications. It concludes by offering suggestions to improve the criterion and to reflect new realities, such as high shares of renewables and electric vehicles (EVs) in the grid and the power sector's move from the present single-buyer arrangement to a wholesale competitive market in the near future.

To maintain a precise balance between demand and supply, an electric utility is obliged to keep sufficient generation capacity and stocks of fuel in its system, a requirement called "resource adequacy". It is necessary to deal with three types of uncertainty: demand can rise or fall abruptly, generation plants can develop unexpected faults, and weather can turn nasty. All three are random in nature, predictable to some extent but not entirely. Utilities therefore try to maintain additional generation capacity on top of the highest demand expected in their systems over a given period, normally one year, planned for each year over the next 20 to 25 years. This planned excess capacity is commonly termed the "reserve margin".

How much generation resource is adequate for a system is a question of how much reliability we want in our supply: essentially a trade-off between the cost of reliability and its value to consumers. The optimum reserve margin for a system depends on multiple factors and varies from one system to another.
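As a rough illustration of the reserve-margin idea, the sketch below computes the planned excess as a fraction of expected peak demand. The capacity and demand figures are hypothetical, chosen only to show the arithmetic, and are not actual NTDC data.

```python
# Reserve margin = excess of installed capacity over expected peak demand,
# expressed as a fraction of that peak. All figures below are hypothetical.

def reserve_margin(installed_capacity_mw: float, peak_demand_mw: float) -> float:
    """Planned excess capacity as a fraction of forecast peak demand."""
    return (installed_capacity_mw - peak_demand_mw) / peak_demand_mw

# Example: 30,000 MW installed against a 25,000 MW forecast peak
margin = reserve_margin(30_000, 25_000)
print(f"{margin:.0%}")  # prints "20%"
```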
Generally, 15 to 25 percent is considered reasonable for most systems, though with effort and re-configuration of the supply portfolio, good utilities can reduce it to as low as 10 percent. This is, however, a deterministic standard (a "rule of thumb" of sorts) and also an average value: because of the "lumpy" nature of generation investments, the actual value varies around this average by a few percent.

To handle randomness in supply, demand, and weather, planners use a variety of techniques to assess the adequacy of different generation portfolios. One such technique is to restrict the risk that supply fails to meet demand below a set target, expressed as the "loss of load expectation (LOLE)" over a given period, for instance "one hour in a year". Skipping the technical details, this means that future generation in the system will be adequate to serve forecast demand for all hours in the year but one. A related metric is the "loss of load probability (LOLP)"; the above target expressed as an LOLP is roughly 0.01 percent. The uncertainty in demand forecasts and weather is generally ignored in generation optimization studies, to keep the computational load within the grasp of available computers.

For the traditional electric utility (handling the entire value chain), the above generation costs are usually lumped with T&D, metering and billing, and servicing costs to derive a system-wide total cost, which is used to set tariffs for different categories of consumers, sometimes distinguished by the costs they impose on the system, but not always. The cost of maintaining the reserve margin is embedded in these tariffs and is not charged separately. Thus every customer shares this cost (a necessary evil, you could say) irrespective of his contribution to it or the value he derives from the reliability of his supply. Historically, utilities have managed this requirement amicably through regulated tariffs and approved prices for their contracts with IPPs.
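The LOLE criterion described above can be sketched with a toy calculation. The example below assumes independent two-state generating units (in service or on forced outage) and a fixed hourly load series; all unit sizes, outage rates, and load figures are hypothetical, and real adequacy studies use far more elaborate probabilistic models.

```python
from itertools import product

# Hypothetical generating units: (capacity in MW, forced-outage rate)
units = [(500, 0.05), (500, 0.05), (300, 0.08), (200, 0.10)]

# Hypothetical hourly load for one year (8,760 hours)
hourly_load = [900] * 8000 + [1200] * 700 + [1400] * 60

def loss_probability(load: float) -> float:
    """P(available capacity < load), enumerating every on/off outage state."""
    p = 0.0
    for states in product([0, 1], repeat=len(units)):  # 1 = unit in service
        available = sum(cap * up for (cap, _), up in zip(units, states))
        if available < load:
            state_prob = 1.0
            for (_, fo_rate), up in zip(units, states):
                state_prob *= (1 - fo_rate) if up else fo_rate
            p += state_prob
    return p

# LOLE = expected number of hours per year in which load exceeds supply
lole_hours = sum(loss_probability(load) for load in hourly_load)
print(f"LOLE = {lole_hours:.2f} hours/year")
```

A planner would then add or retire capacity in the candidate portfolio until the computed LOLE falls below the chosen target, such as one hour per year.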
The above situation changes when the industry is unbundled and deregulated and competition is introduced in its contestable functions. Among many other issues, a serious question arises about the provision of, and compensation for, this reserve. This is because not all generators are created equal in their capital costs, production costs, and design and performance features. Some plants, such as nuclear and coal-fired units, are designed to serve round-the-clock base-load duty on account of their high capital but low operating costs, and are not amenable to frequent cycling. At the other extreme are quick-start, fast-response combustion turbines with low capital but high operating costs, designed to serve only infrequent peak-load duty (a few hours at a time). Other plants fall in between, in both cost and performance.

Two additional aspects of this issue are worth noting. First, the above reliability (or unreliability) is at the aggregate level only. The reliability that consumers finally receive depends to some extent on the above target but involves additional factors that relate more to the design, configuration, and operation of the T&D systems and their robustness against adverse weather. Second, the above reliability is built into the grid and delivered to customers irrespective of the actual value they assign to it. This has been true for much of this industry's history and is beginning to change only lately, with the advent of smart sensors and devices that can channel and control the flow of electricity through the grid and manage demand at the consumer end.

To maintain generation adequacy in the local system, the National Electric Power Regulatory Authority (NEPRA) prescribes an LOLP target of one percent per year (about 88 hours per year) for the National Transmission and Despatch Company (NTDC) in its long-term planning for the national power grid. This target is specified in clause PC4.1 of the NEPRA Grid Code 2005.
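The two reliability targets mentioned so far differ by roughly two orders of magnitude, which the quick arithmetic below makes explicit (using the standard 8,760-hour year):

```python
HOURS_PER_YEAR = 8760  # non-leap year

# The "one-hour-in-a-year" LOLE target, expressed as an LOLP percentage
lolp_one_hour_pct = 1 / HOURS_PER_YEAR * 100
print(f"{lolp_one_hour_pct:.4f}%")  # prints "0.0114%" -- the "roughly 0.01 percent" above

# NEPRA's one-percent-per-year LOLP target, expressed in hours
nepra_hours = 0.01 * HOURS_PER_YEAR
print(f"{nepra_hours:.1f} hours/year")  # prints "87.6 hours/year", i.e. about 88
```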
No further details are available, within the Code or elsewhere, as to how this target was fixed, and whether it was set purely on technical grounds or with some consideration of its financial implications (the cost of the generation capacity or the value of reliability to electricity consumers). The actual reserve margin in the NTDC system in FY2019 was 37 percent. According to NTDC's latest draft Indicative Generation Capacity Expansion Plan ("IGCEP2047"), which is based on NEPRA's prescribed LOLP standard of one percent per year, the margin is envisaged to be around 67 percent in FY2025, growing further to 74 percent in FY2030. The IGCEP2047 report attributes this huge reserve-margin requirement largely to the need to cover the "intermittency" and "variability" of the government's targets of 20 and 30 percent renewable generation by those years, respectively. The new capacity required to maintain generation resource adequacy in the power grid is envisaged to exceed 50 GW by 2030 and is estimated to require USD 70 billion of new capital investment.

The above capacity and the associated capital investment needs are huge, and should force both our policy- and decision-makers to lose sleep. It is particularly important to revisit our whole approach to power sector planning and development, since the present approach is based largely on a static view of the future: it assumes standard plant availabilities, ignores uncertainties in demand forecasts, overlooks weather and climate influences on supply and demand, and disregards the dynamics of the energy market, which are radically changing society's choices of power generation sources and technologies.
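A quick sanity check on the scale of those figures: dividing the article's own numbers, USD 70 billion for 50 GW implies an average installed cost on the order of USD 1,400 per kW, a rough blended figure across technologies rather than a cost estimate for any particular plant type.

```python
# Implied average capital cost from the figures cited above
new_capacity_gw = 50   # "over 50 GW" of new capacity by 2030
investment_usd = 70e9  # "USD 70 billion" of new capital investment

usd_per_kw = investment_usd / (new_capacity_gw * 1e6)  # 1 GW = 1e6 kW
print(f"USD {usd_per_kw:,.0f} per kW")  # prints "USD 1,400 per kW"
```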
The assumptions that go into our future planning need serious re-thinking, and our criteria for resource adequacy need critical re-examination. It is essential that we configure a future generation portfolio that is least taxing on the nation's meager financial resources, is robust against disruptive market forces, and yet is sufficiently flexible to permit adaptation as the future unfolds, with minimal risk of leaving the invested capital stranded.

The writer is a freelance consultant, specializing in sustainable energy and power system planning and development.