While oil prices have risen in recent months, they are a far cry from the $100/bbl prices of two and a half years ago, and there is certainly no guarantee they won't fall back below $50. In other words, the survival of exploration and production companies continues to depend on razor-thin margins, and E&Ps must continue to pay very close attention to their capital and operating costs. Lease operating expenses--the costs incurred by an operator to keep production flowing after the initial costs of drilling and completing a well have been incurred--are a go-to cost component in assessing the financial health of E&Ps. But there's a lot more to LOEs than meets the eye, and understanding them in detail is as important now as ever. Today we continue our series on a little-explored but important factor in assessing oil and gas production costs. (Read the full RBN Energy post here -- a subscription to RBN Energy is required to read the full post.)
With today's low crude oil and natural gas prices, the survival of exploration and production companies depends on razor-thin margins. Lease operating expenses--the costs incurred by an operator to keep production flowing after the initial costs of drilling and completing a well have been incurred--are a go-to variable in assessing the financial health of E&Ps. But it's not enough for investors and analysts to pull LOE line items from Securities and Exchange Commission filings to find the lowest-cost producers, plays, or basins. More than ever we need to understand--really, truly, deeply--what LOEs are, why they matter, how they change with commodity prices, production volumes, and other factors, and how we should use them when comparing players and plays.
We recently paired up with RBN Energy to tackle these issues in a blog series starting with the first post, "LOE-down - Understanding Lease Operating Expenses and How They Drive Production." Because RBN operates on a subscription model, some readers may not be able to access the full post. If you cannot, rest assured that we will post these articles directly to the DDI website within a month of the initial post to the RBN Energy website. And also rest assured that we will continue to post original and meaningful content directly to the DDI blog on a weekly basis.
This week we continue our discussion of probabilistic reserve estimates by taking a look at some of the most important properties of the most common distribution used in probabilistic reserves estimates: The lognormal distribution.
The following is a lognormal distribution meant to represent the full range of possible outcomes (recovered reserve quantities) and the corresponding probability for each outcome. The X-axis shows the magnitude of recovery (barrels recovered) at each point. The Y-axis shows the likelihood of the outcome for each point. The P10 (the "conservative" estimate) is at the 100-million-barrel mark, meaning 10% of the possible outcomes are to the left of this point. The P90 value (the "optimistic" estimate) is at the 1-billion-barrel mark, meaning 90% of the possible outcomes are to the left of this point. The black line in the middle is the P50 value (the "best" estimate), which is also, by definition, the median. The red line is the mean, and the blue line touching the peak is the mode.
The first thing to notice is how small the numbers are on the Y-axis. Don’t let this bother you. The important thing is that--no matter what--the area under the curve will always (and must always) add up to one (i.e. 100%). The curve is created with the idea in mind that it will represent all possible outcomes, which is to say 100% of the outcomes. As a result, the numbers on the Y-axis transform themselves to be whatever they need to be to make sure the area under the curve adds up to exactly one. If, for example, we decided to break down this same distribution into a histogram with 50 bins, we would get the following:
The Y-axis scale on the left remains unchanged because here it is being used to represent the continuous distribution; however, the Y-axis scale on the right, which is being used for the histogram representation, has increased dramatically in scale. These higher values make up for the fact that we are effectively acting as though there are fewer "outcomes" by lumping all the outcomes from a single range into one of the bins. Since there are now fewer "outcomes" (or columns, units, etc.--however you want to think about it) to add together, each one must carry more weight so that the area represented by the columns still adds up to precisely 1 (i.e. 100%). In other words, don't worry too much about the absolute values on the Y-axis scale. What matters are the areas between outcomes.
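If you want to verify this for yourself, here's a minimal sketch in Python (the distribution parameters are hypothetical stand-ins, not the actual parameters behind the figure) showing that the area under a density-scaled histogram stays at one no matter how many bins you choose:

```python
# A minimal sketch: however many bins we use, the total area under a
# density-scaled histogram always sums to one. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=19.57, sigma=0.90, size=100_000)  # barrels

for bins in (50, 500, 5000):
    density, edges = np.histogram(samples, bins=bins, density=True)
    area = np.sum(density * np.diff(edges))
    print(f"{bins:>4} bins -> area under histogram = {area:.4f}")  # ~1.0
```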
Back to Figure 1. Notice how, unlike a normal distribution, this lognormal distribution has different values for the mean, median, and mode. Further, the mean is always larger than the median, and the median is always larger than the mode.
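To make this concrete, here's a minimal Python sketch that fits a lognormal to the P10 and P90 anchors from Figure 1 (using this post's "percent of outcomes to the left" convention) and computes the mean, median, and mode from the lognormal's closed-form properties. The exact numbers depend on the parameters behind the figure, so treat these as illustrative:

```python
# A minimal sketch: fit a lognormal to P10 = 100 MMbbl and P90 = 1,000 MMbbl
# (percent-to-the-left convention) and compare mean, median, and mode.
import numpy as np
from scipy import stats

p10, p90 = 100e6, 1000e6           # barrels, taken from Figure 1
z90 = stats.norm.ppf(0.90)         # ~1.2816

# If X is lognormal, then ln(X) is Normal(mu, sigma).
sigma = (np.log(p90) - np.log(p10)) / (2 * z90)
mu = (np.log(p90) + np.log(p10)) / 2

median = np.exp(mu)                # the P50, by definition
mean = np.exp(mu + sigma**2 / 2)   # always larger than the median
mode = np.exp(mu - sigma**2)       # always smaller than the median

print(f"mode:   {mode/1e6:6.0f} MMbbl")
print(f"median: {median/1e6:6.0f} MMbbl")
print(f"mean:   {mean/1e6:6.0f} MMbbl")
```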
If this doesn't strike you as important, it should. Below we have a similar lognormal distribution with the same P10, P50 (median), and P90 values, but with the other parameters adjusted to increase the mean from 400 million barrels to 600 million barrels (a 50% increase), making it nearly twice as large as the median. If the expected value for an investment increased 50%--even with everything else remaining the same--you would want to know, wouldn't you?
The median, remember, is the same as the P50 estimate (the so-called "best" estimate). This is the "proved plus probable" case that is "as likely as not to be exceeded." And yet the mean is the "expected value," meaning the average value per outcome that we should expect to achieve from repeat trials. How can these two numbers be so wildly different?
To answer this question, we need to understand the difference between “the average outcome” and “the average value per outcome.” While this sounds like semantics, the consequences are substantial.
The Average Outcome
Here we have converted Figure 1 again into a histogram representing the outcomes from 1000 different “trials” drawn from the distribution.
Each column represents the number of outcomes that fell within the range represented by the width of the column. The tallest column, for example, "captured" 104 of the outcomes from the 1000 trials, and you can see that the column's height corresponds to the 104 mark on the right-hand axis. The blue columns represent the outcomes to the left of the median, while the red columns represent the outcomes to the right of the median.
By adding up the outcomes from each of the seven blue columns to the left of the median, we can see that the number of outcomes tallies to exactly 500.
Similarly, the outcomes represented by the red columns to the right of the median also sum up to 500 outcomes, together covering all of the 1000 trials.
From this perspective, we might think of the outcome represented by the line between the blue and red columns as "the average outcome." For instance, consider the very last column to the right. On the one hand, this column represents outcomes having nearly 1.8 billion barrels in reserves, but on the other hand, it represents just one outcome. To take a specific example, from the perspective of the P50 (or median value) the 1.8-billion-barrel outcome is no more special or significant than one of the 24-million-barrel outcomes in the first blue column on the far left. If we say each outcome is equal to all other outcomes by virtue of being just one outcome, then it is fair to say that the median is "the average outcome."
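A quick simulation makes the point (again with hypothetical parameters): draw 1,000 trials and the median splits them into equal halves, no matter how far out the largest outcome sits:

```python
# A minimal sketch: the median splits 1,000 trials into two equal halves,
# regardless of how extreme the right-tail outcomes are. Parameters assumed.
import numpy as np

rng = np.random.default_rng(42)
trials = rng.lognormal(mean=19.57, sigma=0.90, size=1000)  # barrels
median = np.median(trials)

print("outcomes below the median:", np.sum(trials < median))  # 500
print("outcomes above the median:", np.sum(trials > median))  # 500
print(f"largest single outcome: {trials.max()/1e9:.2f} billion barrels")
```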
The Average Of The Outcomes
Contrast this with a case where the magnitude of the outcome does matter. In such cases we often want to know “the average value per outcome.” Here, since the value (or magnitude) of an outcome is represented by the distance along the X-axis, we can use a more intuitive parallel from basic physics to understand what's going on.
When you balance a shape on your finger, what matters is not just how much mass is on either side of your finger, but also how far away from the balancing point the mass is located.
Likewise, the mean for a distribution takes into account the magnitude of the outcomes (or the distance along the X-axis away from the mean). The mean, therefore, becomes the visual "balancing point" for the distribution. Whereas the median is the point where you would cut the distribution in half to get equal areas on both sides, the mean is the point where you would balance the distribution on your finger to keep it from toppling over to the left or to the right. We created the figure below to illustrate this point. The stacked squares that make up this quasi-lognormal distribution are blue to the left of the median and red to the right of the median, yet if you weight each square by its distance from the balancing point (the mean), the balance of forces is equal on both sides.
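In symbols (a short note, using a generic probability density f(x)): the median and the mean solve two different equations, equal areas versus balanced moments:

```latex
% Median m: equal areas on either side of the cut.
\int_{-\infty}^{m} f(x)\,dx = \frac{1}{2}

% Mean \mu: the "torques" about the balancing point cancel out.
\int_{-\infty}^{\infty} (x - \mu)\, f(x)\,dx = 0
```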
With a true lognormal distribution, the tail to the right extends out to infinity, making it possible to have even more extreme differences between the median and the mean values.
Next week we will tie this together with some specific reserves examples and show how these estimates can change shape and converge to a single point over time and as more data is collected.
One might expect reserves and PV10 estimates to be intuitive and easy for investors to understand. After all, these are the measures endorsed by the SEC (with “Standardized Measure” in place of “PV10”) and international securities regulators to help investors understand what they’re getting themselves into. But the unfortunate reality is that these figures were designed for accuracy and internal consistency, not intuitiveness and simplicity. The SEC decided to put investors through a maze that might leave them seeing the world upside down and backwards in hopes that the gains of greater precision would more than offset the pains of lost clarity. In our experience, this has not been the case. More than once we have seen investors make decisions they came to regret after exiting the reserves reporting maze with an upside down and backwards perspective.
Deterministic vs Probabilistic Reserves Estimates
To start us off, we need to recognize that there are two overarching ways to estimate reserves: Deterministic and Probabilistic. The deterministic method is the older and more popular of the two. The probabilistic approach is the new kid on the block: less popular but an up-and-comer. Academics hail the probabilistic approach as the “enlightened” approach destined to overthrow the outdated deterministic methods.
Despite the ongoing debate, these two methods are closely related, such that understanding one helps in understanding the other. Consequently, we'll take a look at both methods before diving into the specific applications that can help investors.
Despite the academic rhetoric, deterministic methods are not necessarily inferior to probabilistic methods. Much depends on the context. Deterministic methods use the best-guess estimate for each input that goes into the reserves estimation formula being used. The formula itself can vary depending on the stage of development and what data is available. Wells with lots of production history will use Arps' decline equations. Wells that haven't been tested will use the volumetric formula. And wells with shut-in pressure tests may use material balance formulas. In any one of these cases, the estimated reserves will be called a "deterministic" estimate so long as one value for each input is fed into the equation.
This approach can still be used to produce estimates with varying levels of certainty, such as 1P (proved), 2P (proved + probable), and 3P (proved + probable + possible) reserves. But even here, we use just one value for each input in each scenario. To get the conservative (or "proved") estimate of total reserves, a single conservative estimate is made for each of the inputs, which then go into the equation (whether Arps', volumetric, or material balance) and produce the conservative estimate. This same process is repeated for the "best" estimate ("proved" + "probable") and the optimistic estimate ("proved" + "probable" + "possible") as shown in Figure 1.
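As a concrete (and entirely hypothetical) illustration of the deterministic approach, here's a sketch using the standard volumetric formula. Each scenario feeds exactly one value per input into the same equation; the input values below are invented for illustration:

```python
# A minimal sketch of a deterministic volumetric estimate: one value per
# input, one number out. All input values are hypothetical.

def volumetric_ooip_bbl(area_acres, thickness_ft, porosity, water_sat, bo):
    """Original oil in place via the standard volumetric formula:
    OOIP = 7758 * A * h * phi * (1 - Sw) / Bo   (barrels),
    where 7758 converts acre-feet to barrels."""
    return 7758 * area_acres * thickness_ft * porosity * (1 - water_sat) / bo

# "Proved" case: one conservative value for each input.
proved = volumetric_ooip_bbl(640, 50, 0.06, 0.40, 1.2)
# "Best" case: one mid-range value for each input.
best = volumetric_ooip_bbl(640, 65, 0.08, 0.30, 1.2)

print(f"proved-case OOIP: {proved/1e6:5.1f} MMbbl")
print(f"best-case OOIP:   {best/1e6:5.1f} MMbbl")
```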
Those who oppose this method do so because the level of certainty assigned to each estimate tends to be more subjective than the level of uncertainty that can be established using probabilistic methods. An engineer can say, “I feel a high degree of confidence that the average porosity is going to be 6 percent or more,” but we can’t say numerically what a “high degree of confidence” means. Is it a 90% confidence estimate or an 89.99% confidence estimate? We just don’t know. And while the engineer could say, “When I say ‘a high degree of confidence’ I mean a 90 percent confidence level,” without probabilistic methods, we just don’t have any kind of numerical framework to help us explain why this is a 90% confidence level and not 89% or 91%--or if we wanted the 91% estimate, what that would be. As we’ll see, probabilistic methods impose this kind of numerical framework on the uncertainty assessment such that we can move easily from one estimate to another.
Probabilistic estimates are calculated by assigning a probability distribution to each input that goes into the equation. Suppose, for instance, that a reservoir engineer comes up with a high, middle, and low estimate for porosity, and that there's also empirical data showing that porosity tends to be normally distributed. The engineer can then come up with a "best fit" distribution with parameters that allow it to hit these low, middle, and high estimates while satisfying the basic requirements of a normal distribution. If the engineer can't reconcile these estimates with the empirically determined distribution, it tells the engineer that there is a logical inconsistency between the estimates and the broader set of empirical data--thus new estimates or a new distribution are required. Once a distribution has been selected for each input, a Monte Carlo simulation (or a similar type of simulation) is run in which the calculation is performed thousands of times (each run is called a "trial"), with a single "sample" plucked from each input distribution for each trial. The result of all these "trials" is a distribution of possible outcomes for the amount of reserves we can expect. From this, the engineer can then come up with an estimate of reserves for any confidence level, whether 10%, 90%, or 97.8152%. Figure 2 shows this general process.
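Here's a minimal sketch of that Monte Carlo step, reusing the volumetric formula from the sketch above with hypothetical input distributions. The percentile call at the end is what lets the engineer read off any confidence level (using this post's "percent of outcomes to the left" convention):

```python
# A minimal sketch of the Monte Carlo step: sample each input from its
# distribution, run the same formula thousands of times, then read off
# any confidence level from the outcome distribution. Inputs are assumed.
import numpy as np

rng = np.random.default_rng(7)
n_trials = 10_000

porosity = rng.normal(0.08, 0.01, n_trials).clip(0.01, 0.30)
thickness = rng.triangular(40, 65, 90, n_trials)    # ft: low, mode, high
water_sat = rng.uniform(0.25, 0.45, n_trials)
area_acres, bo = 640, 1.2                           # held fixed here

ooip = 7758 * area_acres * thickness * porosity * (1 - water_sat) / bo

p10, p50, p90 = np.percentile(ooip, [10, 50, 90])
print(f"P10 {p10/1e6:.1f}  P50 {p50/1e6:.1f}  P90 {p90/1e6:.1f} MMbbl")
```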
Bringing It All Back Together
As we said earlier, while there are clear advantages to using probabilistic methods, most of the publicly traded E&P companies use deterministic methods for reporting purposes. Most likely, this is to avoid shareholder lawsuits. Since these companies don’t know how historical precedents would be applied to probabilistic estimates, they would rather keep using deterministic methods instead of opening themselves up to becoming a “test case” on these issues. In other words, nobody wants to be the guinea pig.
Nonetheless, there are reasons to suspect that the “conservative”, “best”, and “optimistic” reserve estimates produced by deterministic calculations will increasingly converge to the P10, P50, and P90 estimates produced by probabilistic estimates.
First, due to the benefits of using probabilistic methods, the companies themselves are increasingly choosing to use probabilistic methods for their own internal purposes. Exxon Mobil, for example, uses probabilistic methods to guide its internal exploration programs, while continuing to use deterministic methods for reporting.
There are also concerted efforts coming from the Society of Petroleum Engineers (SPE) and academia to move increasingly towards probabilistic methods. And through the influence of these groups, the regulatory agencies are also starting to move in that direction. Before the SEC's 2008/2009 rules modernization, deterministic methods were clearly favored, while probabilistic methods were not even mentioned--some felt that probabilistic methods were even being actively discouraged by the SEC. Now, after the modernization, probabilistic methods have been explicitly added as an acceptable way to compute reserves. The trend appears to be moving steadily in this direction.
Who Cares? (Our favorite question)
For starters, probabilistic methods provide a powerful framework for seeing how reserve estimates with different confidence levels ("proved", "probable", and "possible") are connected. Add to this the fact that many factors appear to be moving deterministic estimates towards convergence with probabilistic methods, and it seems quite clear that there is much to gain for E&P investors by improving their understanding of how probabilistic methods work and how to interpret their results--thereby identifying opportunities and avoiding pitfalls along the way.
In Part 2, we will dive into this issue: How to understand probabilistic reserves estimates. Stay tuned.
According to a recent WSJ article, the market narrative du jour making the rounds among energy traders is this: "OPEC has dramatically increased production ahead of the meeting scheduled for later this month, so they must not really be serious about making any significant production cuts."
While it's possible there won't be meaningful cuts, the narrative has it all backwards.
We need to look at the world through the eyes of the OPEC nations. For starters, their "quotas" are (and have been for many years) based on their reported reserves. As a result, their reported reserves are inflated and notoriously unreliable. This is no secret, and so each OPEC nation has to find other ways to make the case that it should get a larger piece of the total production pie than the other OPEC nations.
A great way to do this is by maxing out current production while pretending (bluffing) that it's not even your full capacity. This makes it look like the nation is giving an even greater concession vis-à-vis the other OPEC nations. Of course, they all know this is B.S., but that doesn't mean it's not happening. The reasons may actually be quite complex psychologically. For instance, if those in power can convince themselves that they are sacrificing the greatest amount, they will bargain more aggressively, in which case they might want to rationalize and validate this view for themselves.
At the same time, all the talk about production cuts has caused some of the consuming nations to adopt a quasi-hoarding mentality, stockpiling as much cheap crude as they can before prices go up.
The good news is that these developments show OPEC and consuming nations taking actions consistent with a view that there will be real, serious discussions genuinely striving towards a meaningful production cut. The increase in production from OPEC nations isn't OPEC saying "We aren't going to cut." It's them saying "There will be a cut, and so we better do whatever we can to make sure we don't get the short end of the stick." Russia is doing the same to bolster its bargaining position. The stockpiling nations, on the other hand, are saying "We think there's going to be a cut and we don't want to get blindsided by this," which--because they have been preparing for this--means they will be less likely to interfere with or object to OPEC's efforts. The more prepared we are for something, the less we tend to worry about it.
We won't go so far as to say this is unquestionably what's going on. But in our view it's more likely than the narrative du jour, and it deserves at least as much consideration by investors and analysts.
In a WSJ article yesterday, the Journal conveyed concerns of a bubble in prices being paid to lease acreage in the Permian Basin, with some companies paying as much as $40,000/acre.
Could there really be a bubble? Absolutely. But, at the same time, even if there were a temporary bubble, there could still be some underlying truth to what's going on, even if it ends up getting out of hand. The Internet, after all, did turn out to be a pretty big deal, despite the fact that there was a dotcom bubble along the way.
Consider the figure below from the Society of Petroleum Engineers (SPE) Petroleum Resource Management System (PRMS). This is the most widely accepted international framework for guiding the classification and categorization of oil and gas reserves and resources. And while there are important differences between the current SEC requirements and the PRMS framework, the current SEC rules do in fact try their best to approximate the PRMS framework (as much as they can without undercutting previous precedents and other considerations).
Note, first, that reserves are only one part of the story. There are also contingent resources, prospective resources, and so-called "unrecoverable" resources.
A quick explanation of the figure is in order:
Note that the figure is not to scale. If it were, the non-reserves categories would actually be much larger than they are in the figure. Even the "proved" reserves (represented by the far left portion of the green "reserves" section) are a mere fraction of the total reserves.
If we use the PRMS framework as the lens through which we look at a resource-rich (as opposed to reserves-rich) basin such as the Permian, and we combine this with the perspective we get from DDI's Shale Enlightenment thesis (leading to more and more "resources" in these basins), a picture begins to emerge in which the prices being paid today to lock in Permian acreage might not sound quite so crazy.
The benefit of locking in acreage within one of the premier sedimentary basins (remember from Basin Basics that all oil and gas produced in the past, present, and future comes from the same sedimentary basins) first occurred to us after Warren Buffett-loving value investors asked us, "What would represent an economic moat for an E&P company?"
Among value investors, strong brands such as Coca Cola are seen as providing a powerful economic moat because they give the company a semi-captive consumer base that will voluntarily pay more for their products, thereby allowing them to maintain the most attractive margins within their sector.
Since oil and gas are commodities, there can be no branding. Instead, an economic moat can be established by "holding captive" the other side of the supply-demand equation. Having a premier acreage position in a major sedimentary basin (where, under the Shale Enlightenment thesis, more and more resources are likely to be discovered) is akin to having a captive, low-cost resource base.
While there is certainly much more to say on this subject, for brevity’s sake, we’ll stop here for the day and pick up with this thread at a later date. For now this should give us plenty to chew on.
Recently, an investor looking at midstream MLP opportunities asked us, "How long do pipelines last?" This investor wanted to know whether a new pipeline would require large capital outlays to overhaul or maintain it within a short enough time frame to influence his investment analysis.
Pipelines can last for a very, very long time. There are many still in operation today that are more than 50 years old. And, notably, the design standards from 50 years ago were not nearly as robust as they are today. The EPA didn’t even exist 50 years ago!
The Minimum Federal Safety Standards for pipelines (which have a large impact on how long pipelines are designed to last) are published in the U.S. Code of Federal Regulations at 49 CFR Part 192 for natural gas pipelines and 49 CFR Part 195 for hazardous liquids such as crude oil.
It's likely that no one could really say exactly what the shortest time frame is within which a pipeline could experience an unanticipated rupture or other failure. But from the perspective of an investor in such an asset, the important point is that such failures would not be system-wide catastrophic failures requiring expensive overhauls. There might be fines or penalties associated with environmental impacts (perhaps insured against?), but the cost of the repairs themselves would be very small in comparison to the cost of building a pipeline from scratch.
The principal threat to a pipeline's integrity is corrosion, which can come from either the inside or the outside of the pipeline. Brine (salty) water can build up on the outside and cause corrosion. The fluids on the inside (whether oil or natural gas) almost always contain some amount of water, sulfur, carbon dioxide, and hydrogen sulfide, all of which are corrosive. This is one of the reasons why pipeline operators have specifications for the oil and natural gas that they are willing to accept into their pipeline systems.
(This is also one of the reasons why certain mixtures of oil and natural gas are more valuable than others. "Sour" oil or natural gas, for instance, is almost always less valuable than "sweet" oil or natural gas because it contains a high amount of sulfur compounds (the rotten-egg smell of hydrogen sulfide is how it got the name "sour"). Refiners and pipeline shippers demand reduced prices for "sour" oil and gas because the corrosive components mean these oil and gas streams have to undergo additional treatment to meet transportation specifications or to avoid wreaking havoc on refining equipment. No one really cares about the smell; it's the corrosion that's the problem.)
The other way pipeline operators protect their pipelines against corrosion (aside from their specifications) is by installing what's called a "cathodic protection system." This is generally required under current regulations. A good example is where a company will drive very long metal rods deep into the ground every few hundred feet along the length of the pipeline. These are then connected to the metal surface of the pipeline to serve as a "sacrificial metal." As a result, whatever corrosion does occur takes place on the connected metal, which is easier to monitor and replace.
There are also very effective means for monitoring pipeline integrity over time. The gold standard for this kind of monitoring is a device called a “smart pig.” As funny as it sounds, that is in fact the technical name. A regular “pig” is like a large cork or bullet that you push through a pipeline (using fluid on the backside to push it) every once in a while when you want to make sure that the inside of the pipeline stays nice and clean (no buildup of residue). It works like a squeegee, scraping along the insides of the pipeline. Think of how the “plunger” piece of a syringe moves along the inside.
Smart pigs are basically the same as regular pigs, but they have all kinds of sensors and instruments added that allow them to assess the integrity of the pipeline along the entire length of the pipeline.
In short, there are many reasons to look at a pipeline as a very, very long-lived asset, one requiring relatively little upkeep during its first several decades, at least when the required costs are compared with the initial cost of constructing the pipeline.
As always, feel free to shoot us an email if you want any additional clarification or if you'd like us to connect you with a pipeline design engineer who could provide a more thorough, in-depth discussion.
Recently I spoke with an investor who was concerned that the reserves required to support oil and gas pipeline economics would not last long enough to justify the investment in the added infrastructure. This conversation reminded me of how the questions of reserves, "How much is left?" and "Where will new reserves be found?" (both relevant to pipeline economics), are intimately tied to the concept of "sedimentary basins." Since I suspect many oil and gas investors are not as familiar with this big-picture concept as they would like, I thought I would give a quick overview:
All of the oil and natural gas that has ever been produced, and all that will be produced in the future, will most likely (with, say, a 95% probability) come from the limited number of sedimentary basins that have already been identified throughout the world. Oil and natural gas only form, accumulate, and exist (for that matter) in sedimentary basins. Notice in the map below how all of the "new" shale plays (red) are located within the basins (green). These are the same basins where we found all the old conventional stuff, too.
The way these basins work is that various types of sedimentary rocks called "source rocks" (which are virtually all shales) get baked at high pressures and temperatures in the deepest parts of the sedimentary basins, converting the solid organic matter (known as kerogen) into oil and natural gas. The more it gets baked, the more "gassy" the mixture; the less baked, the more "oily" the mixture. The oil and gas migrate out of the source rocks into all kinds of different configurations of other sedimentary rocks (often sandstones). These other sedimentary rocks become the reservoirs from which conventional methods extract oil and natural gas. Now, with the "shale revolution," we are actually targeting many of the source rocks directly. Regardless of the approach, this all happens (and only happens) within sedimentary basins.
You might be tempted to say, "Well, if we're now tapping the source rocks, how do we know the reserves are going to last? Doesn't this just prove we're scraping the bottom of the barrel?"
Here's the crazy thing: The source rocks have often only expelled a tiny fraction of their hydrocarbons. What's more, even in the conventional reservoirs (sandstones, etc.), only a fraction of the oil and gas has been recovered. In the industry, we talk about this issue as a matter of "recovery efficiency," or sometimes as the "recovery factor" for a particular reservoir. So-called "primary recovery" typically only gets 30% of the oil. Then more complex projects for "secondary recovery" and "tertiary recovery" are implemented to recover an incremental 20% or so. Getting these incremental amounts out of the reservoir is often just a matter of the current price of oil.
The basic point is this: Even if primary, secondary, and tertiary recovery together capture roughly half the oil in place, that still means that for every drop of oil that has ever been produced out of these sedimentary basins (from the beginning of human civilization), there is at least as much or more still remaining. Whether that oil will be produced depends more on the incremental cost of producing that oil in relation to other alternatives.
From this perspective, you could argue that a pipeline connecting a major basin to a major market hub (such as one that connects the Appalachian basin to the storage facilities in Cushing, Oklahoma) is going to have some kind of use for a very, very long time. Nonetheless, there is still the matter of how much current capacity exists in relation to the future flow rates that will be required to meet future demand, etc. But this should at least give investors some perspective to assess what the long-term value of a pipeline might be.
A recent WSJ article attempts to explain the resiliency of U.S. shale production and emphasizes the unique flexibility of U.S. shale resources.
While the article does a good job, I would like to elaborate on a few points:
First, it is true that U.S. shale resources were originally seen as the marginal source of production after the 2014 price collapse (implying that shale production would come to a halt), and that this original perception turned out to be wrong. But in truth, it's not really appropriate to talk about whether or not "U.S. shale" as a whole is or isn't the marginal source of production. That's like saying "T-shirts are brown." Obviously, some are and some aren't--there are a lot of different T-shirts out there.
Likewise, there is a massive quantity and a wide variety of shale resources in the United States. Some are marginal, while others are not. To go even further, those that are far away from the economic margin can be either highly economic (such as Apache's recent "Alpine High" discovery) or incredibly uneconomic (such as the Lower Huron shale in southern West Virginia). It all depends on which shale you're talking about, where within that shale is being targeted, what methods are being used, and so on.
Also (and relatedly), it's worth pointing out that the flexibility of U.S. shales emphasized in the article is not strictly the result of U.S. bankruptcy laws, but of the combination of U.S. bankruptcy laws and the spectrum of economic variability of the underlying shale resources. Not only do U.S. bankruptcy laws allow filing operators to continue their operations (as the WSJ article mentions), but they also facilitate the transfer of the highest-quality resources from the hands of companies unable to finance the development of these resources to the hands of those that can.
In some respects, it is almost like a card game where every time a player loses, the remaining players get to take their pick of the loser's cards. Invariably the loser will have a mix of good and bad cards. Since the remaining players are able to pick up the good cards (keeping them in play) while leaving the bad cards behind, the net effect is an overall increase in the quality of the remaining players' hands.
As in the card game, the "reshuffling" that happens with bankruptcy filings leads to more of the remaining companies having higher-quality resource portfolios, allowing them to continue producing even in times of distress.
One of the most important things I learned while studying electromagnetism as an undergraduate was that the forces of nature are inescapably linked to the geometry of their surroundings.
For example, a satellite dish can send information over very long distances because the signal moves in parallel lines (staying together in a tight "beam"). A radio antenna, on the other hand, cannot transmit signals very far because the signal spreads out in three dimensions as it moves away from the antenna.
In between these two are phenomena like radiation fading away from a power line. The fact that the power line is a line (a significant geometric feature) means that the signals from each point along the line interfere in a way that cancels any spreading in directions parallel to the line. This has the effect of cutting off one of the dimensions over which the radiation can spread.
An easy way to remember this is that the geometry of the source (the antenna) and the geometry of the signal are inverses of each other. The radio antenna is effectively a single point, which in geometry has no dimensions at all, and so the signal is unrestrained and spreads over the most dimensions possible (three dimensions). The power line is a line, which in geometry has one dimension, so the signal spreads over (three minus one) two dimensions. Lastly, the satellite dish is designed to mimic a geometric plane, which in geometry has two dimensions, so the signal spreads over (three minus two) one dimension--which means almost no spreading at all.
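For those who like the math, the standard energy-conservation argument captures all three cases in one line (a sketch, not tied to any particular antenna design): a fixed amount of power spread over an expanding surface gives

```latex
% Fixed total power spread over an expanding surface:
% point source -> sphere (area ~ r^2), line source -> cylinder (area ~ r),
% plane source -> parallel beam (constant cross-section).
I_{\text{point}}(r) \propto \frac{1}{r^{2}}, \qquad
I_{\text{line}}(r) \propto \frac{1}{r}, \qquad
I_{\text{plane}}(r) \propto 1
```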
These same kinds of relationships between geometry and potency apply to fracking and producing oil and gas (and all physical processes).
Fracking in a horizontal well, for instance, is much like the radiation moving away from a power line. The volumes of the fracking fluid are spread out in two dimensions. As we move further away from the well (to where the well is effectively a point), it becomes more like a radio antenna, spreading out in three dimensions. This is one reason why environmentalists' concerns are misplaced when they focus on the subsurface spreading of fracking fluids. With the volumes spreading in three dimensions, they don't (and indeed they can't) get very far.
On the production side, the geometry is the same, but the direction of the flow is reversed. This is significant. The initial production is coming in from all three dimensions, which makes for high initial rates. Then, over time, the drainage becomes increasingly limited to two dimensions, as the rate of production is more and more influenced by the fact that these shales are spread out like pancakes (or geometric planes).
To some extent, this is an oversimplification, but the overarching theme is spot-on and very useful as a framework for understanding what's going on with different types of wells. It explains why shales have such high initial decline rates (because they are rapidly transitioning from three-dimensional drainage to two-dimensional drainage), why conventional production behaves differently (the change in the effective geometry is more gradual and less drastic), and why environmentalists don't need to be worried about direct contamination from underground fracking activities (if they are going to worry, they should worry about spills happening at the surface while operators are handling the fluids).
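To see how this plays out in the numbers, here's a minimal sketch using the Arps decline equations (the same ones mentioned in our reserves posts), with entirely hypothetical parameters: a high-b hyperbolic decline standing in for a shale well and a gentle exponential decline standing in for a conventional well:

```python
# A minimal sketch: Arps decline curves showing why a shale well's rate
# falls off much faster early on than a conventional well's. Parameters
# are hypothetical, for illustration only.
import numpy as np

def arps_rate(qi, di, b, t):
    """Arps decline: hyperbolic for b > 0, exponential in the b -> 0 limit."""
    if b == 0:
        return qi * np.exp(-di * t)
    return qi / (1 + b * di * t) ** (1 / b)

t = np.array([0, 1, 2, 5])                      # years on production
shale = arps_rate(1000, 1.5, 1.3, t)            # steep early decline
conventional = arps_rate(1000, 0.15, 0.0, t)    # gentler exponential

for yr, qs, qc in zip(t, shale, conventional):
    print(f"year {yr}: shale {qs:6.0f} bbl/d   conventional {qc:6.0f} bbl/d")
```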
So next time you're at a cocktail party, try this on and see if you can sound like a fracking genius.