Seismic petrophysics: Part 1


We never seem to have enough data to analyze the complexity of the subsurface. The geologist would love to have hundreds of cores, but the manager will let him have just one. The geophysicist's dream is to acquire a new 3D with long offsets, complete azimuth coverage, and 5-m bins, but she is more likely to be stuck with a couple of 2D lines.

The idea in this tutorial is to show how to create new data starting from what is available so we can make deductions, analyze alternative scenarios, and decide whether a certain variation in lithologic properties would have an impact on seismic. Specifically, I will show how to create new well-log data. In doing so, I hope to demonstrate how easily this kind of work can be done by all practitioners who dislike “black-box” approaches.

The goal is to create an abstraction of the reservoir rock under study to investigate alternative and plausible scenarios, e.g., a change in pore fluids or in some lithologic property such as porosity. The workflows I will show are:

  • Import well-log data and create a lithology curve log.
  • Augment data through fluid-replacement modeling.
  • Describe the reservoir through simple statistics.
  • Create synthetic data sets that represent different scenarios.

There is also some data-loading and data-management overhead to take care of, so we will focus only on the first item in this article.

All Python code and data used here are available on GitHub. Please take a minute to look at them and perhaps follow along by running the code yourself.

Import well-log data

To handle well-log data, I use a Python library called Pandas, which makes it very easy to manage and inspect large, complex data sets. For example, we can load a sample data set with[1]

>>> import pandas as pd

>>> logs = pd.read_csv('qsiwell2.csv')

The object logs is a DataFrame, a “container” we can use for all sorts of things, such as investigating the depth range or the average velocity:

>>> print("depth min {0:.2f} m".format(logs.DEPTH.min()))

>>> print("depth max {0:.2f} m".format(logs.DEPTH.max()))

>>> print("Vp average {0:.2f} m/s".format(logs.VP.mean()))

depth min 2013.25 m
depth max 2640.53 m
Vp average 2977.10 m/s

Pandas also allows us to have a quick glance at all the logs loaded by simply calling logs.describe(), which will output a table such as this:

Stats <math>V_{P}</math> (m/s) <math>V_{S}</math> (m/s) <math>\rho</math> (g/cc) <math>\phi</math> (fraction)
count 4117 4117 2701 2701
mean 2977 1371 2.22 0.29
std 449 297 0.06 0.03
min 1440 689 2.04 0.11
25% 2594 1141 2.18 0.28
50% 3036 1415 2.22 0.30
75% 3251 1595 2.26 0.31
max 4431 2428 2.54 0.38
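The table above keeps only four of the curves. As a minimal sketch of how to reproduce it (assuming the density and porosity curves are stored under the column names RHO and PHI, which the snippets above do not confirm):

>>> logs[['VP', 'VS', 'RHO', 'PHI']].describe().round(2)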

Create a lithofluid-class log

First, I calculate a lithofluid-class (LFC) log that separates groups of data with similar lithology and/or pore-fluid content. The values of the LFC log are assigned following these rules:

  • 0: Undefined
  • 1: Brine sand
  • 2: Oil sand
  • 3: Gas sand
  • 4: Shale

First I need to create the “flag” logs brine_sand, oil_sand, gas_sand, and shale (these are Boolean logs, i.e., made of True/False values in Python), using cutoff values on <math>V_{sh}</math> (shale volume) and <math>S_{w}</math> (water saturation):

>>> sand_cutoff = 0.20

>>> brine_sand = ((logs.VSH <= sand_cutoff) & (logs.SW >= 0.9))

>>> oil_sand = ((logs.VSH <= sand_cutoff) & (logs.SW < 0.9))

>>> shale = (logs.VSH > sand_cutoff)

Notice that we cannot define gas_sand just yet, because the in situ logs did not record any gas sand. We will deal with that in the next tutorial, on fluid replacement.

These flag logs are then used to create the LFC log:

>>> import numpy as np

>>> temp_lfc = np.zeros(np.shape(logs.VSH))

>>> temp_lfc[brine_sand.values] = 1

>>> temp_lfc[oil_sand.values] = 2

>>> temp_lfc[shale.values] = 4

>>> logs['LFC'] = temp_lfc

To see how many samples make up each facies:

>>> string = "brine sst={0}, oil sst={1}, shale={2}"

>>> data = (np.count_nonzero(brine_sand), np.count_nonzero(oil_sand), np.count_nonzero(shale))

>>> print(string.format(*data))

brine sst=706, oil sst=134, shale=1128
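Now that the classes are encoded in the LFC log itself, an equivalent tally can be obtained directly from Pandas (a one-line sketch, not part of the original code):

>>> logs.LFC.value_counts()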

Pandas also makes quick graphical inspection of the data easy. For example, Figure 1 shows <math>I_{P}</math> histograms for all the available classes, obtained with:

>>> logs.IP.hist(bins=50, by=logs.LFC)

For a comprehensive look at the data available, we plot everything in one panel, applying a color convention for the lithofluid classes (Figure 2).
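Figure 2 itself was made with Matplotlib. The sketch below shows one way to build that kind of display, under a few assumptions: the acoustic impedance is in the column IP, the logs are sorted by increasing depth, and the LFC codes are mapped to grey, blue, green, red, and brown. It is an illustration, not the exact code behind the figure.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors

# color convention: 0=undefined, 1=brine sand, 2=oil sand, 3=gas sand, 4=shale
facies_colors = ['#B3B3B3', 'blue', 'green', 'red', '#996633']
cmap_facies = colors.ListedColormap(facies_colors, 'indexed')

fig, ax = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(8, 6))
ax[0].plot(logs.VSH, logs.DEPTH, '-g', label='Vsh')
ax[0].plot(logs.SW, logs.DEPTH, '-b', label='Sw')
ax[1].plot(logs.IP, logs.DEPTH, '-k')
# draw the LFC log as a colored strip
strip = np.repeat(logs.LFC.values[:, np.newaxis], 50, axis=1)
ax[2].imshow(strip, aspect='auto', cmap=cmap_facies, vmin=0, vmax=4,
             extent=(0, 1, logs.DEPTH.max(), logs.DEPTH.min()))
ax[0].set_ylim(logs.DEPTH.max(), logs.DEPTH.min())  # depth increasing downward
ax[0].set_ylabel('Depth (m)')
ax[0].set_xlabel('Vsh, Sw')
ax[0].legend(fontsize='small')
ax[1].set_xlabel('Ip (m/s x g/cc)')
ax[2].set_xlabel('LFC')
ax[2].set_xticks([])
plt.show()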

The same data are shown in a crossplot domain that should be familiar to geophysicists, i.e., acoustic impedance (<math>I_{P}</math>) versus <math>\frac{V_{P}}{V_{S}}</math> (Figure 3). The colors are the same as in Figure 2, but what about that red? It is reserved for gas sands, which do not yet exist in this data set. In the next tutorial, I will show how to synthesize gas sands and create an additional gas_sand class.
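A crossplot of this kind can be sketched as a scatter plot colored by the LFC code, reusing the cmap_facies colormap defined above (the VS column name is, again, an assumption):

fig, ax = plt.subplots(figsize=(6, 6))
# one point per log sample, colored by its lithofluid class
ax.scatter(logs.IP, logs.VP / logs.VS, s=20, c=logs.LFC,
           cmap=cmap_facies, vmin=0, vmax=4)
ax.set_xlabel('Ip (m/s x g/cc)')
ax.set_ylabel('Vp/Vs')
plt.show()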

A look ahead

In this tutorial, we have laid the foundations for the real work. In Part 2, we will look at applying Gassmann's equation to our logs to perform fluid-replacement modeling (FRM). Starting from the original data we have looked at in this tutorial, we will replace oil with brine, then transform all the fluid to oil and to gas (Figure 4). This will let us examine the relationship between impedance and <math>\frac{V_{P}}{V_{S}}</math> with the new, synthetic data. From there, we can do some statistical cleaning and analysis for a complete interpretation.
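For reference, the standard form of Gassmann's relation that underpins FRM is

<math>\frac{K_{sat}}{K_{0} - K_{sat}} = \frac{K_{dry}}{K_{0} - K_{dry}} + \frac{K_{fl}}{\phi \left( K_{0} - K_{fl} \right)},</math>

where <math>K_{sat}</math>, <math>K_{dry}</math>, <math>K_{0}</math>, and <math>K_{fl}</math> are the bulk moduli of the saturated rock, the dry rock frame, the mineral matrix, and the pore fluid, respectively, and <math>\phi</math> is porosity; the shear modulus is unchanged by the fluid. The details, and the code, will come in Part 2.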

References


  1. Avseth, P., T. Mukerji, and G. Mavko, 2005, Quantitative seismic interpretation: Applying rock physics tools to reduce interpretation risk: Cambridge University Press.
