Smoothing surfaces and attributes
This tutorial originally appeared as a featured article in The Leading Edge in February 2014.
Several years ago I wrote an article about smoothing (Hall 2007). At the time I was aware of but unversed in the ways of reproducible science and open source software. So I offered to share my code, but it wasn’t online. And it wasn’t really code — it was a bunch of Landmark PowerCalculator scripts. My dataset was proprietary. None of what I’d shown was practically reproducible by anyone. This time it will be different (this is the first article in the TLE series).
Seismic horizons, whether of the time, depth, or attribute variety, are just big tables of numbers. Think of them as arrays (a bit like a spreadsheet full of cells containing numbers), or even as images (where the cells are pixels). Indeed, we can be even more general — all data can be thought of this way. Logs are 1D arrays, and seismic volumes are 3D arrays.
Like all subsurface data, these arrays or images contain signal and noise. Sometimes they are so noisy that we want to do something about it. Broadly speaking, there are two approaches to noise removal: linear and nonlinear filters. Linear filters treat all pixels the same, mixing the noise with the signal. This is how stacking works: we average the traces across offset angles, on the assumption that the noise, being random, cancels itself, while the signal does not. Nonlinear filters treat pixels differently. Most methods assume some spectral characteristic of the signal (usually that it is spatially smoother than the noise, which is the same as saying it has a longer wavelength, or a lower wavenumber) and use it to discriminate between signal and noise.
Linear filters are usually implemented by convolution. Perhaps the simplest kind of convolution is a moving average. Imagine taking the moving average of a well log, with seven samples going into each mean. The seven-sample group is called a kernel. In the case of a mean, all the sample weights are equal — we sum the log samples in the neighbourhood at each stop and divide by seven. Sometimes we might increase the weight of the central value at each stop, and decrease the weight at the edges, remembering to ensure the weights still sum to one.
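The moving average described above can be sketched in a few lines of NumPy. The synthetic "log" here is a hypothetical stand-in (a trend plus random noise), invented for illustration:

```python
import numpy as np

# A hypothetical well log: a smooth trend plus random noise.
rng = np.random.default_rng(42)
log = np.linspace(2000, 3000, 100) + rng.normal(0, 50, 100)

# A seven-sample kernel of equal weights that sum to one.
kernel = np.ones(7) / 7

# A weighted alternative: heavier in the middle, lighter at the
# edges, with the weights still summing to one (4/16 + 2*3/16 + ...).
weighted = np.array([1, 2, 3, 4, 3, 2, 1]) / 16

# mode='same' keeps the output the same length as the input.
smooth = np.convolve(log, kernel, mode='same')
```

Swapping `kernel` for `weighted` gives the centre-weighted version of the same filter.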
A 2D kernel is easy to make in Python, using the numerical library NumPy:
import numpy
# Nine equal weights of 1/9, so the kernel sums to one.
kernel = numpy.array([[0.111111, 0.111111, 0.111111],
                      [0.111111, 0.111111, 0.111111],
                      [0.111111, 0.111111, 0.111111]])
Once I have a horizon array to operate on — shown here (without noise) and detailed below with noise — convolution can be done using another library, SciPy:
import scipy.signal
# mode='same' keeps the output the same shape as the input.
output = scipy.signal.convolve2d(noisy_horizon, kernel, mode='same')
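To make this runnable end to end without the tutorial's data, here is a sketch with a synthetic stand-in for `noisy_horizon` (a smooth surface plus random noise; the shape, amplitudes, and noise level are all invented for illustration):

```python
import numpy as np
import scipy.signal

# A hypothetical stand-in for the horizon: a smooth surface
# plus random noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
horizon = 1000 + 100 * np.sin(3 * x) * np.cos(2 * y)
noisy_horizon = horizon + rng.normal(0, 5, horizon.shape)

# The 3x3 mean filter again, written more idiomatically.
kernel = np.ones((3, 3)) / 9

# mode='same' keeps the output the same shape as the input.
output = scipy.signal.convolve2d(noisy_horizon, kernel, mode='same')
```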
A simple plotting command displays a cross-section of the result:
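One such command is matplotlib's `plot`. This sketch uses a hypothetical stand-in grid so it runs on its own; a cross-section is just one row of the array (the `Agg` backend and the output filename are my choices, not part of the original tutorial):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal

# Hypothetical stand-ins for the tutorial's arrays.
rng = np.random.default_rng(0)
noisy_horizon = 1000 + rng.normal(0, 5, (50, 50))
kernel = np.ones((3, 3)) / 9
output = scipy.signal.convolve2d(noisy_horizon, kernel, mode='same')

# Plot the same row of the noisy and smoothed grids.
row = 25
plt.plot(noisy_horizon[row], label='noisy')
plt.plot(output[row], label='smoothed')
plt.legend()
plt.savefig('cross_section.png')
```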
Then I can easily play with other kernels that might do other tasks, such as edge detection:
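For example, a Laplacian-style kernel responds to local curvature rather than local average, so it highlights edges and discontinuities instead of smoothing them. The step-shaped test grid here is a hypothetical example:

```python
import numpy as np
import scipy.signal

# A Laplacian-style edge-detection kernel: zero response on
# flat or linearly varying data, nonzero at sharp changes.
edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]])

# A hypothetical flat grid with a step in the middle.
grid = np.zeros((20, 20))
grid[:, 10:] = 1.0

edges = scipy.signal.convolve2d(grid, edge_kernel, mode='same')
```

Away from the step the response is zero; on either side of it the filter fires with opposite signs.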
I will look more closely at nonlinear filters in a future tutorial. Because they cannot be implemented as a simple convolution, they are harder to implement than linear filters. A great many have been implemented in various image-processing libraries for Python (such as PIL and its fork Pillow, mahotas, and SciPy), but we might also like to build our own.
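As a preview, the median filter is a classic nonlinear filter, and SciPy's `ndimage` module already provides one. This is my choice of illustration, not necessarily what the future tutorial covers; the spiky test grid is hypothetical:

```python
import numpy as np
import scipy.ndimage

# A hypothetical flat grid with one noise spike.
grid = np.ones((9, 9))
grid[4, 4] = 100.0

# A 3x3 median filter replaces each sample with the median of
# its neighbourhood, so an isolated spike vanishes completely,
# whereas a mean filter would only smear it around.
smooth = scipy.ndimage.median_filter(grid, size=3)
```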
- Hall, M (2007). Smooth operator: smoothing seismic horizons and attributes. The Leading Edge 26 (1), January 2007, 16–20. http://dx.doi.org/10.1190/1.2431821