The oil and gas industry is being pushed by regulators, third parties and investors to better identify and mitigate its methane emissions, particularly the few “super-emitting” sites that make disproportionate contributions to global emissions. But while operators are ramping up capital spending on new technologies, one thing has become clear: There is no silver bullet when it comes to reducing emissions, and each option comes with one or more drawbacks, such as source attribution, cost, quantification accuracy and detection limits. In today’s RBN blog, we’ll break down the pros and cons of different measurement technologies.
In Part 1 of this series we discussed how operators can begin to address key environmental goals while protecting, even improving, their bottom line. We also explained why the push to reduce methane emissions is so urgent. In Part 2 we detailed the growing external pressures to control methane emissions and how the industry’s regulatory outlook, its quasi-nongovernmental oversight, and its access to capital are changing in ways that make understanding sometimes inconsistent emissions data vitally important.
Discussions of the technology needed to detect methane emissions tend to include a long list of options covering every possible combination of instrument, approach, and analysis, but we think the choices can be simplified into two key metrics: (1) the distance between the measuring device and the emission source and (2) the measuring frequency. The greater the distance between a measuring device and the emission source (vertical axis in Figure 1), the larger a leak must be for the technology to detect it. On the other hand, the cost per site is generally much higher for hardware deployed on-site than for aerial measurements taken farther from the source. Aircraft tracking typically costs $10,000 to $15,000 a day, for example, but an operator with enough scale and concentration of assets can visit hundreds of sites a day in regions like the Permian Basin.
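As a rough illustration of why scale and asset concentration matter so much to the aerial-survey economics described above, the per-site cost of a flight day can be sketched in a few lines. The daily rates come from the ranges cited in the text; the site counts are hypothetical assumptions for illustration only:

```python
# Back-of-the-envelope cost per site for an aerial methane survey.
# Daily rates ($10,000-$15,000) are from the text; the site counts
# below are assumed figures, not operator data.

def cost_per_site(daily_cost: float, sites_per_day: int) -> float:
    """Average survey cost per site for a single flight day."""
    return daily_cost / sites_per_day

# Dense, Permian-style asset base: 300 sites on a $12,000 flight day
dense = cost_per_site(12_000, 300)    # $40 per site

# Scattered asset base: 25 sites on a $15,000 flight day
sparse = cost_per_site(15_000, 25)    # $600 per site

print(f"dense: ${dense:,.0f}/site, sparse: ${sparse:,.0f}/site")
```

Under these assumptions the per-site cost differs by more than an order of magnitude between the two portfolios, which is why the same technology can be cheap for one operator and uneconomic for another.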