John C. Bancroft

1-20 of 48 results

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5932503

Abstract

Summary Spatial resolution is associated with temporal resolution, but may be limited by the "diffraction" aperture or by inaccurate velocities. Velocity errors occur when data are processed to a datum in violation of the hyperbolic assumption. These errors may be very small and are assumed to be negligible, especially with CMP processing. Prestack migrations gather data from many CMP gathers, and any relative velocity errors degrade the spatial resolution. We demonstrate this spatial resolution loss and recovery using a real 2D data set that contains faulting events. In addition, the resolution of the faults may be further focused, depending on their angle of obliquity to the 2D line. Introduction High spatial resolution data for a research project were acquired in the Hussar area of Alberta, Canada. The sedimentary layers in the area are relatively horizontal, with surface elevations spanning a range of 100 m. A vertical component of the data was extracted and conventionally processed with a standard prestack migration. The results were typical of the area and displayed no faulting. The data were also processed to form common scatterpoint (CSP) gathers, prestack migration gathers that are formed without moveout correction (Bancroft et al., 1994 and 1998). Velocity analysis of each gather provides a unique velocity at each CMP location. Moveout correction, scaling, muting, and stacking produced a prestack migration that appeared to contain faulted structure. The 2D data were further analyzed to evaluate the obliquity of the fault planes, relative to the angle of the 2D line, by modifying the velocities. These results showed improved focusing of the fault planes, identifying the angles of obliquity. It should be noted that the vertical displacement across the faults is very small, but the character of the reflection changes significantly across a fault, as demonstrated in Figure 1a. Note the character change between CMPs 536 and 636 at 2 s, near the bottom of the figure. These data were processed to a maximum time of 4.0 s. Figure 1b shows the same area, processed with a poststack finite difference migration to a maximum time of 2.0 s.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2013 SEG Annual Meeting, September 22–27, 2013

Paper Number: SEG-2013-0658

Abstract

Summary The resolution of seismic data can be significantly improved after migration. This can be achieved with a simple trace deconvolution in areas with a simple geology such as a sedimentary basin, or a more complex deconvolution if the structure is complex. There are considerable objections to this process; some are identified and discussed, then reasons for its use are presented. Two examples of deconvolution after migration are presented.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2013 SEG Annual Meeting, September 22–27, 2013

Paper Number: SEG-2013-1135

Abstract

Summary Different acquisition geometries of the baseline and monitor seismic surveys produce different patterns of acquisition footprint. The resulting time-lapse image shows the differences in these artifacts, which may dominate the changes in the reflectivity model caused by production from, or injection into, the reservoir. Synthetic data are used to show how different acquisition geometries between baseline and monitor surveys lead to different Kirchhoff migration artifacts for the same reflectivity model. Least squares prestack Kirchhoff migration (LSPSM) is performed separately on the baseline and monitor data to attenuate these effects and provide comparable high-resolution images for both pre- and poststack time-lapse studies. A joint LSPSM of both baseline and monitor data is introduced that attenuates the migration artifacts and returns high-resolution LSPSM and/or time-lapse images.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2012 SEG Annual Meeting, November 4–9, 2012

Paper Number: SEG-2012-0754

Abstract

Summary Kirchhoff least squares prestack migration (LSPSM) attenuates acquisition artifacts resulting from irregularities or sparseness in the seismic data sampling, and improves the image resolution. This study shows that this improvement requires accurate subsurface velocity information. It is shown that the resolution of the resulting LSPSM image, the convergence rate of the least squares conjugate gradient (LSCG) iterations, and the ability of LSPSM to reconstruct the data are three factors that strongly depend on the accuracy of the background velocity, and that they can be used as effective tools for verifying the accuracy of the velocity model.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2012 SEG Annual Meeting, November 4–9, 2012

Paper Number: SEG-2012-1475

Abstract

Summary Many seismic datasets are recorded over geologic structures where lateral changes in the physical properties of the stratigraphic layers vary smoothly. For these situations, depth migration algorithms are not required; time migration imaging is known to provide a similar outcome and is more economical. In this paper, we discuss the implementation of Full Waveform Inversion (FWI) algorithms for velocity inversion using Common Scatter Point (CSP) gathers. Since the formation of the CSP gathers is based on Pre-Stack Kirchhoff Time Migration (PSTM), we reduce the computational effort commonly associated with depth migration.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2012 SEG Annual Meeting, November 4–9, 2012

Paper Number: SEG-2012-1568

Abstract

Summary The inversion process to recover rock properties is typically approximated with seismic migration, which is a transpose process. This transpose process limits the frequency content that should be recovered. The lower and higher frequencies that are lost can be recovered by following migration with deconvolution. There is opposition to applying deconvolution after migration; we review those objections and then present two arguments to validate this proposition. The improvement in resolution is illustrated using a simple single-trace spiking deconvolution. We propose that additional improvements can be achieved using a more sophisticated deconvolution that incorporates the dip of an event.
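The single-trace spiking deconvolution mentioned in the abstract can be sketched with a least-squares (Wiener) inverse filter. This is a generic illustration, not the authors' implementation: the synthetic wavelet, trace length, filter length, and prewhitening value are all assumptions for the example.

```python
import numpy as np

def spiking_decon(trace, nfilt=32, prewhiten=0.01):
    """Single-trace spiking deconvolution via a Wiener inverse filter.

    Solves the Toeplitz normal equations R f = e1 (autocorrelation matrix
    times filter equals a zero-lag spike), with prewhitening added to the
    zero lag to stabilize the solve.
    """
    ac = np.correlate(trace, trace, mode="full")
    r = ac[len(trace) - 1:len(trace) - 1 + nfilt].astype(float)
    r[0] *= 1.0 + prewhiten                      # prewhitening
    idx = np.arange(nfilt)
    R = r[np.abs(idx[:, None] - idx[None, :])]   # Toeplitz autocorrelation matrix
    rhs = np.zeros(nfilt)
    rhs[0] = 1.0                                 # desired output: zero-lag spike
    f = np.linalg.solve(R, rhs)
    return np.convolve(trace, f)[:len(trace)]

# Synthetic trace: sparse reflectivity convolved with a decaying
# (minimum-phase-like) wavelet -- all values assumed for the demonstration.
refl = np.zeros(200)
refl[[40, 90, 150]] = [1.0, -0.7, 0.5]
wavelet = np.exp(-0.3 * np.arange(20)) * np.cos(0.8 * np.arange(20))
trace = np.convolve(refl, wavelet)[:200]
decon = spiking_decon(trace)
```

After deconvolution the wavelet energy is compressed back toward the reflectivity spikes, which is the resolution gain the abstract refers to; the dip-dependent deconvolution it proposes would replace this single-trace filter with one estimated along events.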

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2012 SEG Annual Meeting, November 4–9, 2012

Paper Number: SEG-2012-1549

Abstract

Summary A new approach is presented for estimating the velocities of converted wave data that is based on prestack migration by equivalent offset to form common conversion point gathers. These gathers are used to form an initial estimate of the converted wave velocity Vc, which can then be used for the full, accurate process of equivalent offset migration of converted wave data. Equivalent offset common conversion point gathers are formed using the P-wave and S-wave velocities and the double-square-root equation. The formation of these gathers requires approximate values for the P-wave and S-wave velocities, but after their formation, accurate velocities can be picked and the prestack migration completed with moveout correction.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2011 SEG Annual Meeting, September 18–23, 2011

Paper Number: SEG-2011-3882

Abstract

ABSTRACT Equivalent Offset Migration (EOM) is based on the prestack Kirchhoff time migration (PSTM) method. It first maps the energy of the scatter points onto intermediate Common Scatter Point (CSP) gathers, and then, after applying a Normal Move Out (NMO) correction, outputs the migrated image. Assuming a negligible lateral velocity gradient, the CSP data are sorted along normal hyperbolic paths and serve as a useful tool for velocity inversion. The scatter point response below a dipping interface is a tilted hyperbola. Using the constructed wavefront, we established the relationship between the tilted and normal hyperbolae. A similar relationship is obtained by simulating CSP responses. We improved the focusing of the separated energy in the semblance plots by removing the tilt effects. As a result, the accuracy of the migration velocity inversion and the focusing of the time-migrated output image are improved.
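The equivalent offset underlying EOM and CSP gathers can be checked numerically. In a constant-velocity medium the double-square-root (DSR) traveltime from source to scatterpoint to receiver collapses onto a single hyperbola in the equivalent offset he, with he² = x² + h² − (2xh/(tv))². The velocity, depth, and offsets below are illustrative assumptions, not values from the paper.

```python
import math

def dsr_time(x, h, z, v):
    """DSR traveltime for a scatterpoint at depth z, with the CMP a
    lateral distance x from the scatterpoint and half-offset h."""
    t0 = 2.0 * z / v
    t_src = math.sqrt((t0 / 2.0) ** 2 + ((x + h) / v) ** 2)
    t_rcv = math.sqrt((t0 / 2.0) ** 2 + ((x - h) / v) ** 2)
    return t_src + t_rcv

def equivalent_offset(x, h, t, v):
    """Equivalent offset: he^2 = x^2 + h^2 - (2 x h / (t v))^2."""
    return math.sqrt(x * x + h * h - (2.0 * x * h / (t * v)) ** 2)

v, z = 2000.0, 1000.0   # velocity (m/s) and scatterpoint depth (m), assumed
x, h = 500.0, 300.0     # CMP displacement and half-offset (m), assumed
t = dsr_time(x, h, z, v)
he = equivalent_offset(x, h, t, v)
t0 = 2.0 * z / v
t_hyperbola = math.sqrt(t0 ** 2 + (2.0 * he / v) ** 2)  # single NMO hyperbola
```

Here `t_hyperbola` reproduces the DSR time exactly, which is why a CSP gather, once traces are sorted by equivalent offset, can be imaged with an ordinary NMO correction and stack.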

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-2186

Abstract

INTRODUCTION Summary Nonlinear optimization methods (or inversion) were investigated for analyzing synthetic microseismic arrival times. Two direct search techniques, the genetic algorithm and pattern search, were used to find the layered-earth velocity values from P-wave arrival times from a simulated perforation shot. For locating microseismic hypocenters, the gradient-based Levenberg-Marquardt algorithm was used to invert reduced arrival times from borehole and surface receiver arrays. Both categories of nonlinear optimization method, direct search and gradient-based, were effective for inverting arrival times to the required model parameters. Our experience suggests that the direct search methods, in particular pattern search, are simpler and faster in this application, i.e., inverting microseismic arrival time data to obtain either layer velocities or hypocenter coordinates. Unknown model parameters in fitting geophysical survey results can be found by minimizing the misfit between observed and calculated arrival times using nonlinear optimization schemes (generally called inversion techniques by geophysicists). The misfit or objective function to be minimized must be parameterized by an input vector of the variables to be found. Optimization techniques fall into two categories: gradient-based and direct-search. An example of a gradient-based technique is the Levenberg-Marquardt algorithm (Levenberg, 1944; Marquardt, 1967). Examples of direct search techniques are the pattern search method and the genetic algorithm. These algorithms are available in the utility program optimtool, which is bundled in the MATLAB (2009) Optimization Toolbox. Gradient-based optimization methods such as the Levenberg-Marquardt (LM) algorithm can often be trapped in local minima when the objective function is a complex nonlinear equation involving many variables. The more variables there are, the greater the likelihood of local minima, saddle points, or long narrow valleys. Using a gradient method to find the global minimum of the objective function in such cases tends to be problematic. Alternatives to gradient-based methods exist in the form of sophisticated global search techniques such as the genetic algorithm (GA) or pattern search (PS). These direct search methods are described in the literature; see, for example, Whitley (1997) and Kolda et al. (2003). We tested these nonlinear optimization techniques for their effectiveness in solving two problems related to microseismic monitoring. The first problem is estimating the velocity values in an earth model knowing the location of a perforation shot source and the arrival times at a receiver array. The second problem is locating the microseismic hypocenter knowing the arrival times at the receiver array as well as the velocity model. We performed the tests using synthetic arrival times calculated by raytracing through a horizontally layered velocity model. The left panel of Figure 1 is a section view showing the layered-earth velocity model with a microseismic source in a treatment well and an array of geophones in a vertical observation well. The geometry in cylindrical coordinates has azimuthal symmetry about the observation well or the microseismic source. Assuming P-wave velocities, Snell's law ray-tracing from the source to the geophone array gives a set of first-arrival times as a function of depth in the observation well.
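The hypocenter-location problem above can be sketched with a gradient-based least-squares solver. This is a simplified stand-in for the paper's setup, with assumptions labeled: a constant velocity replaces the layered model, SciPy's `least_squares` (a trust-region relative of Levenberg-Marquardt) replaces the LM implementation, and the receiver geometry, source, and origin time are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

v = 3000.0  # assumed constant P-wave velocity; the paper uses a layered model

# Assumed geometry: geophones in a vertical observation well plus a few
# surface receivers, and a known synthetic source for generating data.
receivers = np.array([[0.0, 0.0, 1000.0 + 100.0 * i] for i in range(8)]
                     + [[500.0, 0.0, 0.0], [0.0, 500.0, 0.0], [-500.0, -500.0, 0.0]])
true_src = np.array([300.0, -200.0, 1500.0])
true_t0 = 0.05  # event origin (clock) time, s
t_obs = true_t0 + np.linalg.norm(receivers - true_src, axis=1) / v

def residuals(p):
    """Misfit between observed and modeled first-arrival times."""
    src, t0 = p[:3], p[3]
    return t_obs - (t0 + np.linalg.norm(receivers - src, axis=1) / v)

sol = least_squares(residuals, x0=[0.0, 0.0, 1200.0, 0.0])
est_src, est_t0 = sol.x[:3], sol.x[3]
```

With noise-free synthetic times the gradient-based solver recovers the hypocenter; the abstract's point is that direct-search methods (pattern search, GA) reach comparable answers without derivatives and with less risk of stalling in local minima of a more complex objective.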

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-3135

Abstract

Summary In order to attenuate the migration artifacts and increase the spatial resolution of the subsurface reflectivity, conventional migration may be replaced by least squares migration (LSM). However, this is a costly procedure. To reduce the cost, the feasibility of using multigrid methods to solve the linear system of the prestack Kirchhoff LSM equation is investigated. This study showed that the conventional multigrid method is not viable for solving the Kirchhoff LSM equation, for at least two reasons. The main reason is that the Hessian matrix is not diagonally dominant; therefore, the conventional iterative solvers of the multigrid are not effective. The performance of Conjugate Gradient (CG) multigrid is discussed. It is shown that since CG does not have a smoothing property, it should not be considered an effective multigrid iterative solver. Using CG as an iterative solver for the multigrid may slightly reduce the number of iterations for the same rate of convergence as CG itself; however, it does not reduce the total computational cost. Introduction The ability to handle incomplete and irregular seismic data is probably the main advantage of the Kirchhoff method over other methods of migration. However, incomplete data produce migration artifacts and may give a blurred image of the earth reflectivity. To overcome the migration artifacts, Kirchhoff migration can be augmented by a generalized inverse as an approximation to the exact inverse (Tarantola, 1984). This approach is called LSM (Nemeth et al., 1999; Chavent and Plessix, 1999; Duquet et al., 2000; Kuehl and Sacchi, 2001). The higher resolution images of the LSM can then be used in a forward manner to reproduce or interpolate the missing traces (Nemeth, 1999). However, there are two issues associated with replacing conventional migration with LSM. The first is that the convergence of the method to the correct solution strongly depends on the accuracy of the background velocity information, as shown by Yousefzadeh (2008). Second, LSM consumes more computer time and memory than migration. CG and CG least squares (CGLS) (Hestenes and Stiefel, 1952; Scales, 1987) have been widely used as solvers for the LSM equation (Nemeth et al., 1999; Duquet et al., 2000; Kuehl and Sacchi, 2001; Yousefzadeh, 2008). However, this is an expensive method: in solving the equation with the CGLS method, each iteration requires more than twice the migration run time. Multigrid methods may be another choice. Some PDEs can be solved faster, and with better recovery of the low-frequency content, by multigrid than by many other methods such as Successive Over-Relaxation (SOR) and CG (Stuben, 2002). Using multigrid methods for solving problems in seismic exploration is not a new idea (Bunks et al., 1995; Millar and Bancroft, 2004; Plessix, 2007). In this study, the feasibility of using multigrid properties for solving prestack Kirchhoff time LSM, in order to reduce the computational cost or enhance the resolution of the resulting image, is investigated. The same study could be done for Kirchhoff prestack depth migration.
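The CGLS solver that the abstract compares multigrid against touches the operator only through products with it and its adjoint, which in LSM are de-migration and migration; that is why each iteration costs more than two migration run times. A minimal sketch on a small dense matrix standing in for the Kirchhoff operator (the matrix size and random data are assumptions):

```python
import numpy as np

def cgls(A, d, niter=200, tol=1e-10):
    """Conjugate-gradient least squares for min ||A m - d||^2.

    Iteratively solves the normal equations A^T A m = A^T d using only
    products with A and A^T, as LSM does with the de-migration and
    migration operators.
    """
    m = np.zeros(A.shape[1])
    r = d - A @ m                # data residual
    s = A.T @ r                  # gradient (a 'migration' of the residual)
    p = s.copy()
    gamma = s @ s
    for _ in range(niter):
        q = A @ p                # a 'de-migration' of the search direction
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

rng = np.random.default_rng(1)
A = rng.standard_normal((120, 40))   # stand-in for the Kirchhoff operator
m_true = rng.standard_normal(40)
d = A @ m_true                       # noise-free synthetic data
m_est = cgls(A, d)
```

The two applications of the operator per iteration (`A @ p` and `A.T @ r`) are the cost the multigrid study was trying to avoid.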

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-2935

Abstract

Summary The absorbing boundary conditions and the nonreflecting boundary condition are two of the most popular solutions to the computational boundary condition problem. We report our implementations of these boundary conditions within our staggered-grid finite-difference applications and describe their features. Then we present a method combining the absorbing boundary conditions with the nonreflecting boundary condition. Introduction Computational boundary condition problems have been a persistent topic in the field of wave phenomena modelling. Migration algorithms also have to deal with boundaries. There are many solutions to the boundary condition problem. The most cited method is the "absorbing boundary conditions" proposed by Engquist and Majda (1977) and Clayton and Engquist (1977). Another popular method, called the "nonreflecting boundary condition", was presented by Cerjan, Kosloff, Kosloff, and Reshef in 1985. There are other solutions as well, such as the "transparent boundary" by Long and Liow (1990) and the "perfectly matched layer" method by Collino and Tsogka (2001). This abstract reviews the absorbing and nonreflecting boundary conditions first, and then presents the combined boundary conditions. Boundary conditions The elastic modeling method we use is based on the Madariaga-Virieux staggered-grid scheme (Virieux, 1986). To illustrate the boundary conditions, a subsurface model containing a point diffractor in a homogeneous medium is designed. Figure 1 shows the geometry and the P-wave velocities, although the real subsurface model parameters used by the modeling algorithm are densities and Lamé coefficients. The reflections (PP and PS) from the absorbing boundary are attenuated to a low level compared to the rigid boundary, but the absorbing boundary reflections are still stronger than the diffractor reflections, which means that the artifacts may mask the true reflections inside the medium. A nonreflecting boundary condition (Cerjan, Kosloff, Kosloff, and Reshef, 1985) employs a strip of nodes on the boundary to attenuate wave amplitudes. First, the wave is weakened toward the outside boundary, which means the reflection from the outside boundary will be attenuated. The second effect is that, when the wave enters the energy-absorbing strip, it "sees" the change in impedance of the medium, and part of the wave energy is reflected back. Thus, for a strip of width N, there seem to exist N fictitious reflectors. Hence, there are two kinds of reflections generated by the nonreflecting boundary: one from the fictitious reflectors, and the other from the outside rigid boundary. The attenuation constant affects both reflections. Conclusions The conventional absorbing boundary conditions reduce computational edge artifacts to a low level, but these artifacts may still mask weak reflections. The nonreflecting boundary condition produces two parts of reflections: one from the attenuation strip, which acts like fictitious reflectors, and the other from the outside boundary. To reduce both reflections, the nonreflecting strip needs to be wide, and this leads to more computational cost. Combining the absorbing boundary conditions with the nonreflecting boundary condition results in fewer boundary artifacts with little additional computational cost.
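The attenuation strip of the nonreflecting boundary condition can be made concrete with the damping weights of Cerjan et al. (1985), w(i) = exp(−(a(N−i))²); the values a = 0.015 and N = 20 are the commonly quoted ones from that paper, used here as assumptions.

```python
import numpy as np

N, a = 20, 0.015      # strip width (nodes) and damping constant, assumed
i = np.arange(N + 1)  # i = 0 at the outer edge of the grid, i = N interior
w = np.exp(-(a * (N - i)) ** 2)
# Each time step, wavefield values inside the strip are multiplied by w.
# The gradual taper is what creates the "fictitious reflectors" described
# above: every step in w is a small impedance change that reflects a little
# energy, so reducing those reflections requires a wider (costlier) strip.
```

The weights rise smoothly from about 0.91 at the outer edge to exactly 1 at the interior edge, so the wavefield is damped gently rather than cut off abruptly.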

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-2191

Abstract

Summary Analytic solutions for estimating the location of a microseismic event using first arrival times are presented. The solutions are based on the Apollonius method, which fails when the four receivers are coplanar or collinear. Two analytic methods are presented: for four coplanar receivers at the corners of a square, and for three collinear and equally spaced receivers. These analytic solutions are assumed to be part of a larger system of receivers composed of a square grid of receivers, or of equally spaced receivers in a well. An additional method is presented that is based on the intersection of vectors defined from the center of the receivers to the analytic solutions. The sensitivity in estimating the source location is illustrated with receivers in a vertical well. Introduction The Apollonius method is an analytic solution that directly computes a microseismic source time and location using the first arrival times at four arbitrarily located receivers (Bancroft and Du, 2007; Bancroft et al., 2009). This solution fails if the receivers are coplanar or collinear, and may produce poor results when the source is located at specific locations relative to the receiver locations. Simpler solutions are presented for four coplanar receivers at the corners of a square, and for collinear receivers. We assume that the receivers used in the analytic solutions are part of a larger grid system, such as many receivers on the surface or in a well. Consider sixteen receivers on a 4x4 grid with a separation distance h. There will be nine h x h squares, four 2h x 2h squares, and one 3h x 3h square on the perimeter. Each of these squares can be used to estimate a source location (x0, y0, z0). Three collinear receivers in a vertical well can only compute a radial offset r0, depth z0, and the source time t0. When estimating the three variables (r0, z0, t0), only the first arrival times of three equally spaced collinear receivers are required. A vertical array of six receivers spaced at an interval h will produce five possible receiver groups with interval h, four with interval 2h, three with interval 3h, and two with interval 4h. These fifteen groups will each produce an estimate of the source location. The above estimates will match machine accuracy when there is no error in the arrival times or the locations of the receivers. We will assume that the velocity and the locations of the receivers are known exactly, but that there is a noise error in the estimated arrival times (jitter) at the receivers. We then estimate the error in the source location for different levels of jitter. The final estimate of the source location is typically the mean of each component (x0, y0, z0) or (r0, z0) from the many possible combinations of receivers in large arrays. An alternate method computes vectors to each of the initially estimated source locations, and the intersection of these vectors provides a new estimate of the overall solution.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2009 SEG Annual Meeting, October 25–30, 2009

Paper Number: SEG-2009-1557

Abstract

Summary The first arrival clock-times from a number of receivers are used to estimate the clock-time of a microseismic event and to estimate its 3D location. A 2D source location method using the Apollonius construction was extended to 3D. However, the 2D solution is restricted to three non-collinear receivers, and the 3D solution requires four non-coplanar receivers. These restrictions are typically violated when receivers are placed on the surface or in a well. Non-Apollonius solutions are presented for these restricted cases, along with analysis that relates the accuracy of the estimated source location to the accuracy of the velocity. Introduction The problem of identifying the location of a source event has application to defining the distribution of well fraccing material or CO2 sequestration, and to the location of impending geological hazards such as landslides or major earthquakes. Other areas of application for these techniques are converting raypath traveltimes to gridded traveltimes, sniper location, and global positioning. Microseismic events may be located by a number of techniques. One uses a three-component receiver to estimate the arrival direction of the wavefield, and then uses the difference in P- and S-wave arrival times to estimate a distance. Other techniques use first arrival clock-times and then "search over a grid of hypothesized source locations" (Daku et al., 2004), or use the wavefield propagation of many surface receivers (Chambers et al., 2008). The approach in this paper uses the first arrival clock-times from a number of receivers to estimate the clock-time of the event and to estimate its location. Earlier papers have shown how the 2D problem of locating the source was solved by constructing a circle tangent to three other circles. These circles were centered at the receiver locations and had radii proportional to the clock-times of three non-collinear receivers. The construction of a circle tangent to three other circles was solved by Apollonius about 200 B.C., and many algebraic solutions based on the geometrical solution have been derived. We presented a 3D solution to the problem, based on the 2D solution, that allowed the estimation of the clock-time and location of the source (Bancroft and Du, 2006 and 2007). The 3D solution requires the construction of a sphere tangent to four other spheres, with the restriction that the centers of the spheres (the receiver locations) not be coplanar or collinear. We present solutions for the cases where four receivers are coplanar on a square grid on the surface, or where three equally spaced receivers are collinear in a well. The solution for three collinear receivers becomes a 2D problem and is only able to solve for the depth and radial distance of the source from the well; it cannot estimate the azimuth. We will review the 2D Apollonius solution, as the geometry is viewable with circles in a 2D figure rather than the spheres of the 3D solution. The problem with the Apollonius solution is identified and new solutions are introduced. The sensitivity of the Apollonius solution and the new solutions will be discussed relative to the accuracy of the velocity.
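The 2D problem can be illustrated without the tangent-circle construction: writing (x − xi)² + (y − yi)² = v²(ti − t0)² for each receiver and subtracting the first equation from the others cancels the quadratic terms, leaving a linear system in (x, y, t0). This linearization needs four receivers rather than the three of the Apollonius solution (which is one way to see why that solution is double-valued). It is not the paper's method; the receiver layout, velocity, and source below are assumptions for the demonstration.

```python
import numpy as np

v = 2000.0  # assumed known velocity, m/s
rx = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 700.0], [900.0, 900.0]])
true_src = np.array([350.0, 250.0])
true_t0 = 0.1                                           # event clock-time, s
t = true_t0 + np.linalg.norm(rx - true_src, axis=1) / v  # first-arrival clock-times

# (x - xi)^2 + (y - yi)^2 = v^2 (ti - t0)^2; subtracting the first
# receiver's equation from the rest cancels x^2 + y^2 and t0^2, giving
# three linear equations in the unknowns (x, y, t0).
A = np.column_stack([
    -2.0 * (rx[1:, 0] - rx[0, 0]),
    -2.0 * (rx[1:, 1] - rx[0, 1]),
    2.0 * v ** 2 * (t[1:] - t[0]),
])
b = v ** 2 * (t[1:] ** 2 - t[0] ** 2) - np.sum(rx[1:] ** 2, axis=1) + np.sum(rx[0] ** 2)
xyt = np.linalg.solve(A, b)   # recovered (x, y, t0)
```

With noise-free clock-times the linear solve recovers the source and its clock-time to machine precision, matching the abstract's remark that exact inputs reproduce the source exactly.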

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2007 SEG Annual Meeting, September 23–28, 2007

Paper Number: SEG-2007-0279

Abstract

Summary Numerous approaches have been published which derive fluid indicators (often called direct hydrocarbon indicators, or DHI) from AVO equations. The main idea behind these methods is to use the linearized Zoeppritz equations to extract petrophysical parameters such as P-impedance, S-impedance, bulk modulus, shear modulus, Lamé's parameters, Poisson's ratio, etc., and, from cross-plots of these parameters, infer the fluid content. Often, these indicators provide a good tool to quickly identify hydrocarbon zones. But the question of which is the best approach is still under debate. The purpose of this study is to examine which indicator can most easily discriminate a gas/oil sand from its background geology and which indicator is most sensitive to pore-fluid content estimation. Introduction The fluid factor (ΔF) was proposed by Smith and Gidlow (1987), and was derived by combining the linearized AVO equation with the mudrock line (Castagna et al., 1985). The authors also combined density and P-wave velocity changes by using Gardner's equation (Gardner et al., 1974). A version of the fluid factor which utilized density was introduced by Fatti et al. (1994). Goodway et al. (1997) suggested that Lamé's elastic parameters λ and μ and their products with density could be useful tools in AVO analysis. Gray et al. (1999) showed how to estimate the parameters μρ and λρ more directly by a new parameterization of the linearized AVO equation, as does Chen (1999). Russell et al. (2003) introduced the attribute Ip² − c·Is², with c being a function of the local (Vp/Vs)², where Vp and Vs are the dry rock P-wave and S-wave velocities. In this study we used Gassmann fluid substitution to model changes in these parameters at given reservoir conditions, in order to analyze the sensitivity of each fluid hydrocarbon indicator. Methodology The standard approach to AVO analysis is well known and was derived by Shuey (1985) based on the Aki-Richards (Aki and Richards, 1980) linearized formulation of the Zoeppritz equations. It relies on the observation that low-impedance gas sands encased in shale will have a larger negative AVO intercept (A) and a larger negative AVO gradient (B) than those not associated with gas. Thus A*B should be an excellent indicator of Class III gas sands. Furthermore, as shown by Swan (1993), product indicators have excellent S/N characteristics and may exhibit some degree of immunity to mild phase and velocity errors. On the other hand, A*B will very effectively screen out Rutherford Class I (positive A, negative B) and Class II (A near zero, negative B) gas sands. It can also be shown that, if Vp/Vs = 2, the scaled sum A+B gives an estimate of Δσ, the Poisson reflectivity, where σ is Poisson's ratio, and the scaled difference A−B gives an estimate of Rs(0), the zero-offset shear-wave reflectivity. It has been noted that the fluid factor and the Poisson reflectivity are equivalent when Vp/Vs = 2. Lithologies that do not follow the Vp-Vs relationship for brine-saturated clastics may produce a fluid factor anomaly.
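The intercept and gradient in the two-term Shuey approximation, R(θ) ≈ A + B sin²θ, can be recovered from amplitudes picked at several incidence angles with a small least-squares fit; the angle range and the class-III-like values of A and B below are assumptions for illustration, not values from the study.

```python
import numpy as np

# Two-term Shuey approximation: R(theta) ~ A + B * sin^2(theta).
# Fit the intercept A and gradient B from angle-dependent amplitudes,
# then form the A*B product indicator discussed in the abstract.
A_true, B_true = -0.08, -0.15           # assumed class-III-like values
theta = np.deg2rad(np.arange(0, 31, 5))  # incidence angles, 0-30 degrees
r = A_true + B_true * np.sin(theta) ** 2  # noise-free synthetic amplitudes

G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
(A_est, B_est), *_ = np.linalg.lstsq(G, r, rcond=None)
product = A_est * B_est   # positive when A and B are both negative (Class III)
```

A large positive `product` flags the Class III response, while the Class I and II sign patterns listed in the abstract (positive or near-zero A with negative B) give a negative or negligible product, which is exactly the screening behaviour described.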

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2007 SEG Annual Meeting, September 23–28, 2007

Paper Number: SEG-2007-1943

Abstract

Summary In this study, 2D multicomponent seismic data and well logs from the Willesden Green, Alberta area are used to investigate an oil reservoir interval. The Upper Cretaceous (Turonian) Second White Speckled Shale (2WS) represents the zone of interest. PP and PS synthetic seismograms generated from well logs correlate reasonably with the surface seismic data. PP and PS inversion was applied to the vertical and radial components to yield P and S impedance. The geologic model consists of 2WS shale interspersed with sand, limestone, gas and oil, giving rise to a low Vp/Vs ratio. The oil-saturated 2WS interval shows a P-wave impedance decrease and S impedance increase. The Vp/Vs estimate shows anomalous values over the zones of interest around the producing wells: 8-13-41-6W5; 8-26- 41-6W5 and 6-15-41-6W5. Introduction The Willesden Green oilfield is located in south-central Alberta, covers 50,827 hectares, and is the second largest Cardium field after Pembina (both in area and initial oil in place). Several productive horizons in this area (the Second White Speckled Shale, Cardium, Viking and Glauconitic sands) continue to produce significant quantities of oil and gas. The 2WS is picked on geophysical logs due to its high gamma response. As calcite percentage in the source rock increases toward pure limestone, the hydrocarbon potential decreases. A number of wells in the proximity have produced and still produce oil and gas from the 2WS. Because a number of penetrations of the 2WS shales have not produced oil, conventional P-wave prospecting was considered inadequate. In attempt to better delineate productive zones, multicomponent seismic surveys were undertaken and converted-waves (PS) were analysed along with the P-wave data. Seismic Acquisition In 1992, two 3-C seismic lines were acquired by Response Seismic Ltd. WG-1 an E-NE line crossed by WG-2 an NNW seismic line. The location map is shown in Figure 1. 
The lines were processed for PP and PS reflections, including anisotropic analysis. VSP data acquired with vertical vibrator sources on line WG-2 were used for a more confident interpretation. The surface seismic surveys employed two vertical vibrators with 3-C receivers at offsets up to 2520 m. A 60 m source interval and a 20 m receiver interval were used. Seismic Processing The two lines were reprocessed in 2004 at Sensor Geophysical Ltd. Vertical- and radial-component migrated stacks were generated. The processing flow for the PP section was conventional and included surface-consistent deconvolution, time-variant spectral whitening, refraction statics, trim statics, CDP stack and migration. The processing flow for the PS section included asymptotic CDP binning, surface-consistent deconvolution, refraction statics, trim statics, CDP stack and migration. PP, PS Interpretation and Inversions Initial interpretation of the original data was undertaken by Stewart et al. (1993). The overall goal was to see if 3-C seismic data could help find oil in the productive interval of the Second White Speckled Shale. They found some anomalies in the areas of known production. However, because data processing and interpretation have advanced considerably since then, we were motivated to revisit this data set.
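The impedance-based Vp/Vs screening described above can be sketched in a few lines: since Zp = ρ·Vp and Zs = ρ·Vs, the density cancels in the ratio, so Vp/Vs = Zp/Zs sample by sample. The function name and cutoff value below are illustrative assumptions, not values from the paper (which reports anomalously low Vp/Vs over the oil-saturated 2WS interval).

```python
import numpy as np

def low_vpvs_zones(zp, zs, cutoff=1.8):
    """Screen inverted impedances for low-Vp/Vs anomalies.

    zp, zs : P- and S-impedance arrays from inversion
    cutoff : illustrative threshold (an assumption, not from the paper)

    Because Zp = rho*Vp and Zs = rho*Vs, density cancels and
    Vp/Vs = Zp/Zs at each sample.
    """
    ratio = np.asarray(zp, float) / np.asarray(zs, float)
    return ratio, ratio < cutoff
```

A low-ratio flag of this kind would be mapped along the 2WS interval around the producing wells; the cutoff would in practice be calibrated against the well logs.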

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2006 SEG Annual Meeting, October 1–6, 2006

Paper Number: SEG-2006-2231

Abstract

ABSTRACT A method is presented for identifying the source of a locally circular or spherical wavefront given the traveltimes at arbitrary locations. For 2D data, the center of the circular wavefront is computed from three traveltimes recorded at three arbitrary locations. Application to 3D data requires four traveltimes recorded at four arbitrary locations. This method is suited to a number of applications, such as mapping traveltimes that are computed along sparse raypaths to gridded traveltimes, monitoring microseismic events caused by fraccing, or the possible prediction of landslides in geologically unstable areas. The analytic solutions for both 2D and 3D produce two candidate answers, from which one must be chosen based on neighbouring conditions or by using another known traveltime and its location.
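The 2D case described above (three traveltimes, two candidate solutions) can be sketched as follows, assuming a constant, known velocity; the function name and geometry are hypothetical, not from the paper. Subtracting the range equations |p_i − c|² = v²(t_i − τ)² pairwise removes the quadratic terms in the unknown center c, leaving a 2×2 linear system in c parameterized by the origin time τ, plus one quadratic in τ whose two roots give the two candidate solutions.

```python
import numpy as np

def locate_source_2d(points, times, v):
    """Locate the centre of a circular wavefront from three traveltimes.

    points : three (x, y) receiver coordinates
    times  : the three traveltimes observed at those receivers
    v      : constant medium velocity (an assumption of this sketch)

    Returns a list of (centre, origin_time) candidates; the quadratic
    generally yields two, as the abstract notes.
    """
    p = np.asarray(points, float)
    t = np.asarray(times, float)

    # Pairwise subtraction of |p_i - c|^2 = v^2 (t_i - tau)^2 gives a
    # linear system A c = b0 + b1*tau in the unknown centre c.
    A = 2.0 * (p[1:] - p[0])
    b0 = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
          - v ** 2 * (t[1:] ** 2 - t[0] ** 2))
    b1 = 2.0 * v ** 2 * (t[1:] - t[0])
    alpha = np.linalg.solve(A, b0)   # c = alpha + beta*tau
    beta = np.linalg.solve(A, b1)

    # Substitute back into the equation for receiver 0: quadratic in tau.
    d = p[0] - alpha
    a2 = beta @ beta - v ** 2
    a1 = -2.0 * (d @ beta) + 2.0 * v ** 2 * t[0]
    a0 = d @ d - v ** 2 * t[0] ** 2
    sols = []
    for tau in np.roots([a2, a1, a0]):
        if abs(np.imag(tau)) < 1e-9:
            tau = float(np.real(tau))
            sols.append((alpha + beta * tau, tau))
    return sols
```

Choosing between the two roots would use the neighbouring conditions or extra traveltime the abstract mentions.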

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2006 SEG Annual Meeting, October 1–6, 2006

Paper Number: SEG-2006-2534

Abstract

ABSTRACT Downward continuation methods assume that extrapolation takes place between two planes, but most land surveys are acquired over irregular surfaces. Most approaches that allow downward continuation methods to handle such data, like the wave equation datuming and zero-velocity layer methods, require some processing prior to migration. In this paper, we show how explicit wavefield extrapolation methods in the space-frequency domain can efficiently extrapolate data directly from rugged topography. By building operators with different depth steps, these wavefield extrapolators can handle lateral velocity and topographic variations. We use a source-receiver migration technique to illustrate how this approach can be implemented, using a synthetic dataset as an example.
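The paper builds explicit space-frequency operators that extrapolate directly from rugged topography; for intuition only, here is a minimal constant-velocity phase-shift depth step in the frequency-wavenumber domain, the simplest relative of such extrapolators. The function name, grid parameters, and sign convention are illustrative assumptions and do not reproduce the paper's operator design.

```python
import numpy as np

def phase_shift_step(panel, dz, v, dx, dt):
    """One downward-continuation depth step of a (t, x) wavefield panel.

    Constant velocity v only; real extrapolators (like the paper's
    explicit space-frequency operators) handle lateral velocity and
    topographic variation, which this sketch does not.
    """
    nt, nx = panel.shape
    W = np.fft.fft2(panel)                             # to (omega, kx)
    w = 2.0 * np.pi * np.fft.fftfreq(nt, dt)[:, None]
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)[None, :]
    kz2 = (w / v) ** 2 - kx ** 2
    # sign(w) keeps the filter conjugate-symmetric so the output is real
    kz = np.sign(w) * np.sqrt(np.maximum(kz2, 0.0))
    W = W * np.exp(1j * kz * dz) * (kz2 > 0)           # drop evanescent part
    return np.real(np.fft.ifft2(W))
```

The propagating part of the spectrum is phase-rotated and the evanescent part zeroed, so a depth step never increases the energy of the panel.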

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2006 SEG Annual Meeting, October 1–6, 2006

Paper Number: SEG-2006-3091

Abstract

ABSTRACT The surface consistent equations always have one or more singular values, depending on the configuration of the seismic survey. These singular values slow convergence, add uncertainty, and make it difficult to resolve the long wavelengths in the solution. Multigrid methods possess a greater ability to resolve long wavelength terms than the Gauss-Seidel methods currently in use, and these improved solutions are calculated at little or no additional computational cost. While total convergence is not guaranteed, multigrid methods seem to universally improve the quality of the surface consistent decomposition. We also encounter some limitations in solving the surface consistent equations. An attempt is made to further justify our previous conclusion (Millar and Bancroft, 2004) that some of the long wavelength drift that can plague Gauss-Seidel solutions is theoretically avoidable. Using multigrid techniques, we obtain more accurate solutions on synthetic examples in roughly the same amount of computer time. We examine how the quality of the solution depends on the geometry of the survey and on the role the singular values play in the solution. Lastly, we explore the challenges of including a time-variant term in the equations.
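The Gauss-Seidel-style baseline that the multigrid approach is compared against can be sketched as alternating updates of source and receiver terms for the model t_ij ≈ s_i + r_j. The constant-shift ambiguity between s and r (add c to every s, subtract c from every r) is exactly the kind of singular value the abstract describes. Function name and geometry are illustrative; a real surface-consistent decomposition also carries offset and structure terms.

```python
import numpy as np

def surface_consistent_gs(tobs, src, rec, ns, nr, niter=20):
    """Alternating (Gauss-Seidel-style) fit of tobs[k] ~ s[src[k]] + r[rec[k]].

    tobs     : observed traveltime residuals, one per trace
    src, rec : integer source/receiver index per trace (every index sampled)
    ns, nr   : number of sources and receivers
    """
    s = np.zeros(ns)
    r = np.zeros(nr)
    for _ in range(niter):
        for i in range(ns):            # update each source term in turn
            m = src == i
            s[i] = np.mean(tobs[m] - r[rec[m]])
        for j in range(nr):            # then each receiver term
            m = rec == j
            r[j] = np.mean(tobs[m] - s[src[m]])
    # Pin mean(s) = 0 to remove the constant-shift null space
    # (one of the singular values noted in the abstract).
    c = s.mean()
    return s - c, r + c
```

The predicted sums s[src] + r[rec] are invariant under the null-space shift, which is why the fit is well determined even though s and r individually are not.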

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2006 SEG Annual Meeting, October 1–6, 2006

Paper Number: SEG-2006-0179

Abstract

Summary Thick anisotropic sequences of dipping sandstones and shales often overlie reservoirs in fold and thrust belts, such as the Canadian Foothills. In these cases, assuming that anisotropy is negligible, or considering only anisotropy with a vertical symmetry axis (VTI), may result in imaging problems and mispositioning errors. Three prestack anisotropic migration algorithms based on quite different principles, Kirchhoff, phase-shift-plus-interpolation (PSPI) and reverse-time (RT), are presented for dipping TI media. Derived from the isotropic Kirchhoff, PSPI and reverse-time migration methods, these three algorithms each inherit different characteristics of accuracy and efficiency. The ray-tracing algorithm used in 2-D prestack Kirchhoff depth migration is modified to calculate traveltimes in TI media with a tilted symmetry axis. Based on an analytical solution for the vertical wavenumber in dipping TI media, and an assumed relationship between the anisotropic parameters and the lateral velocities, the prestack anisotropic PSPI migration can handle laterally variable anisotropic parameters and velocities. The prestack anisotropic reverse-time algorithm employs the weak-anisotropy approximation to obtain an individual P-wave equation and implements depth migration with the pseudo-spectral method. A migration example on physical model data with these three algorithms shows the improved imaging that results from considering anisotropy parameters, and the different characteristics of each method. Introduction Many hydrocarbon exploration and development projects are located in areas containing dipping anisotropic sequences, such as the Canadian Foothills (Isaac and Lawton, 1999). In these cases, depth migration with either an isotropic migration algorithm or a vertical axis of symmetry (VTI) assumption will result in imaging problems and mispositioning errors.
Anisotropic depth migration is required to correctly locate images when dipping transversely isotropic (TI) strata are present. Some advanced migration methods have been extended from isotropic to anisotropic media. Anisotropic depth migration methods, as with isotropic methods, can be based on various approaches such as ray tracing, the one-way wave equation, and the full wave equation. The prestack anisotropic Kirchhoff migration method presented in this paper is based on ray-tracing theory. The prestack anisotropic PSPI method starts from the one-way wave equation and carries out downward-continued wavefield extrapolation. Prestack anisotropic reverse-time migration achieves recursive extrapolation backward in time with the full wave equation. These three representative methods are chosen to demonstrate the characteristics of Kirchhoff, PSPI and reverse-time migration for dipping TI media in terms of performance, accuracy and efficiency. In this paper, we first introduce the theory of the three anisotropic migration methods, analyze each algorithm, and consider the increase in computation time of each anisotropic algorithm relative to the corresponding isotropic case. Through a physical model example, we demonstrate the performance of the anisotropic migration algorithms and give some evaluation of the three methods. Theory To illustrate the differences among the anisotropic Kirchhoff, PSPI and reverse-time migration algorithms, we focus on the core techniques of each. Anisotropic Kirchhoff depth migration The difference between the anisotropic and isotropic Kirchhoff migration algorithms lies in the traveltime calculation; the Kirchhoff summation itself is unchanged.
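The weak-anisotropy approximation mentioned for the reverse-time algorithm rests on Thomsen-style phase velocities; a minimal sketch of the weak-anisotropy P-wave phase velocity in a TI medium with a tilted symmetry axis is below (angles measured from the vertical, the tilt handled by a simple rotation; the function name is an assumption, and this is only the velocity ingredient, not the migration itself).

```python
import numpy as np

def vp_weak_tti(theta, v0, eps, delta, tilt=0.0):
    """Weak-anisotropy P-wave phase velocity in a tilted TI medium.

    theta : propagation angle from the vertical (radians)
    v0    : P velocity along the symmetry axis
    eps, delta : Thomsen parameters (weak-anisotropy form)
    tilt  : dip of the symmetry axis from the vertical (radians)
    """
    a = theta - tilt  # angle measured from the (tilted) symmetry axis
    sa, ca = np.sin(a), np.cos(a)
    return v0 * (1.0 + delta * sa ** 2 * ca ** 2 + eps * sa ** 4)
```

Along the symmetry axis the velocity reduces to v0, and perpendicular to it to v0·(1 + eps), which is what makes a tilted axis shift images laterally if it is wrongly treated as vertical.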

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2006 SEG Annual Meeting, October 1–6, 2006

Paper Number: SEG-2006-0164

Abstract

Summary The purpose of this study is to estimate P-wave anisotropy parameters in orthorhombic media by using the shifted-hyperbola NMO equation. We propose a method for estimating the anisotropy parameters independent of the vertical velocity; in fact, we also estimate the vertical velocity. The method was tested on data collected over a synthetic model with orthorhombic symmetry. The method fails if the symmetry of the medium is elliptical. In this paper we detail the procedure to estimate the anisotropy parameters which characterize orthorhombic media. Determination of the short-offset effect is relatively easy, but the long-offset effect is difficult because it needs a measure of the horizontal velocity, which is difficult to obtain. In this study, the long-offset moveout information is used for the estimation. A Dix-type normal moveout (NMO) correction is usually not very accurate at long offsets, and it worsens when anisotropy is present (Castle, 1994). The shifted-hyperbola NMO (SHNMO) equation is more accurate at long offsets than the Dix NMO equation. Therefore, by using the shifted-hyperbola NMO equation to correct long-offset data, we obtain a better estimate of the RMS velocity (and hence a better interval velocity), and so a better estimate of the anisotropy parameters. The main technique used to estimate seismic velocity is to fit a normal-moveout hyperbola to the traveltime curve; the two methods used here to perform the velocity analysis are normal moveout (NMO) analysis and shifted-hyperbola NMO (SHNMO) analysis. Shifted NMO hyperbola equation Castle (1994) derived a new approximation to the NMO equation using the principles of reciprocity, finite slowness, and an exact constant-velocity limit.
For "reasonable" offsets, his approximation, termed SHNMO, can be written in several equivalent ways. Determination of the anisotropy parameters without knowledge of v0 Advantages of SHNMO The following are the advantages of SHNMO over the Dix NMO hyperbola: it is accurate up to fourth order in offset; it can approximate higher orders in media with lesser inhomogeneity; it is easier to implement than other higher-order approximations; and the shift parameter provides vital information on the anisotropy parameters. We now discuss the procedure for estimating the anisotropy parameters in detail. As seen above, it is impossible to measure the anisotropy parameters without information on v0. The following method can be used to overcome this need for prior information on v0. The first step is to estimate the higher-order moveout effect as described above, which is accomplished by fitting a higher-order NMO curve to the traveltime data: we fit an SHNMO curve with the shift parameter s varying with offset. The next step relates s to the anisotropy parameters. Implementation The SHNMO equation is non-linear, and therefore linear inversion techniques such as least-squares inversion fail. A random-walk technique such as simulated annealing serves to invert the moveout equation for both s and v_nmo.
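Castle's shifted-hyperbola NMO equation cited above can be written t(x) = τ_s + sqrt(τ_0² + x²/(S·v_nmo²)) with τ_s = t_0(1 − 1/S) and τ_0 = t_0/S, where S is the shift parameter; S = 1 recovers the ordinary Dix hyperbola. A minimal sketch (function name assumed):

```python
import numpy as np

def t_shnmo(x, t0, v, S=1.0):
    """Castle's (1994) shifted-hyperbola NMO traveltime.

    x  : source-receiver offset
    t0 : zero-offset two-way time
    v  : NMO velocity
    S  : shift parameter; S = 1 reduces to the Dix hyperbola
    """
    tau_s = t0 * (1.0 - 1.0 / S)         # time shift of the hyperbola
    tau_0 = t0 / S                        # shifted zero-offset time
    return tau_s + np.sqrt(tau_0 ** 2 + x ** 2 / (S * v ** 2))
```

At zero offset the shift and the shifted apex recombine to t0 for any S, which is why fitting S against long-offset data leaves the near-offset fit intact; the non-linear dependence on S is what motivates the simulated-annealing inversion mentioned above.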