Sunday, April 29, 2018

Domain where Hadoop can be used: OIL AND GAS

Accelerate Innovation with Well Log Analytics (aka LAS Analytics)

Large, complex datasets and rigid data models limit the pace of innovation in exploration and production, because they force petrophysicists and geoscientists to work with siloed data and manual quality control (QC) processes. Well log analytics on HDP lets those scientists ingest and query disparate LAS (Log ASCII Standard) data for use in predictive models, while still leveraging familiar statistical tools such as SAS or R to build new models and then rapidly iterate on them across billions of measurements. Combining LAS data with production, lease, and treatment data can increase production and margins.

Dynamic well logs normalize and merge hundreds or thousands of LAS files, providing a single view of well log curves, presented as new LAS files or as images. With HDP, those consolidated logs can also retain sensor data that used to be discarded as “out of normal range” because of anomalous readings from power spikes, calibration errors, and other exceptions. An automated QC process can ingest all the data, good and bad, then scrub it to eliminate the anomalous readings and present a clear, single view of the data.
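
As a minimal sketch of that kind of automated QC step, the snippet below uses the open-source lasio library to load a LAS file into a pandas DataFrame and null out gamma-ray readings that fall outside a plausible physical range. The file name, the GR curve mnemonic, and the thresholds are illustrative assumptions, not values from any particular dataset.

    # Minimal QC sketch: load a LAS file and scrub out-of-range readings.
    # Assumes the open-source lasio library (pip install lasio); the file
    # name, curve mnemonic (GR), and valid range are hypothetical.
    import lasio

    las = lasio.read("example_well.las")   # hypothetical input file
    df = las.df()                          # curves as a pandas DataFrame

    # Treat gamma-ray values outside a plausible physical range as
    # anomalies (power spikes, calibration errors) and null them out.
    GR_MIN, GR_MAX = 0.0, 300.0            # assumed valid range, API units
    if "GR" in df.columns:
        bad = (df["GR"] < GR_MIN) | (df["GR"] > GR_MAX)
        df.loc[bad, "GR"] = None
        print(f"Scrubbed {bad.sum()} anomalous GR readings")

The same scrubbing logic scales out when the LAS files are ingested into HDP, with the cleaned curves written back as new LAS files or images.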

Define Operational Set Points for Each Well & Receive Alerts on Deviations

After identifying the ideal operating parameters (e.g. pump rates or fluid temperatures) that produce oil and gas at the highest margins, that information can go into a set point playbook. Maintaining the best set points for a well in real time is a job for Apache Storm’s fault-tolerant stream processing and alerting. Storm running alongside Hadoop can monitor variables like pump pressures, RPMs, flow rates, and temperatures, then trigger corrective action if any of these readings deviate from their pre-determined ranges. This data-rich framework helps the well operator save money and adjust operations as conditions change.
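
A bolt in a Storm topology would apply a per-reading check along the lines of the Python sketch below. The set point ranges and the alert hook are hypothetical examples of a playbook, not values from any real well.

    # Sketch of the per-reading check a Storm bolt would apply.
    # The set-point ranges and the alert hook are hypothetical.
    SET_POINTS = {
        "pump_pressure_psi": (1800.0, 2200.0),
        "pump_rpm":          (900.0, 1100.0),
        "flow_rate_bpm":     (8.0, 12.0),
        "fluid_temp_c":      (40.0, 90.0),
    }

    def send_alert(well_id, metric, value, expected):
        # Hypothetical hook; in practice this might page an operator
        # or publish to a corrective-action queue.
        print(f"ALERT {well_id}: {metric}={value} outside {expected}")

    def check_reading(well_id, metric, value):
        """Alert when a sensor reading leaves its set-point range."""
        low, high = SET_POINTS[metric]
        if not (low <= value <= high):
            send_alert(well_id, metric, value, (low, high))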

Optimize Lease Bidding with Reliable Yield Predictions

Oil and gas companies bid for multi-year leases for exploration and drilling rights on federal or private land. The price paid for the lease is a known present cost paid to access a future, unpredictable stream of hydrocarbons. A bidder can outbid its competitors by reducing the uncertainty around that future benefit and predicting the well’s yield more accurately. Apache Hadoop can provide this competitive edge by efficiently storing image files, sensor data, and seismic measurements, adding missing context to any third-party survey of the tract open for bidding. A company that pairs that unique information with predictive analytics can pass on a lease it might otherwise have pursued, or find “diamonds in the rough” and lease them at a discount.
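
To make “predicting the well’s yield” concrete, here is a minimal sketch of the kind of model a bidder might fit over offset-well data pulled from Hadoop. The feature names, column names, and data source are entirely hypothetical.

    # Hypothetical yield-prediction sketch using scikit-learn.
    # Features, columns, and the data extract are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # In practice these rows would be assembled from survey, seismic,
    # and offset-well data stored in Hadoop.
    wells = pd.read_csv("offset_wells.csv")          # hypothetical extract
    features = ["net_pay_ft", "porosity", "lateral_length_ft"]
    X, y = wells[features], wells["first_year_boe"]  # hypothetical columns

    model = GradientBoostingRegressor().fit(X, y)
    candidate = pd.DataFrame([[85.0, 0.12, 7500.0]], columns=features)
    print("Predicted first-year yield (BOE):", model.predict(candidate)[0])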

Repair Equipment Preventatively with Targeted Maintenance

Traditionally, operators gathered data on the status of pumps and wells through physical inspections, often in remote locations. This meant that inspection data was sparse and difficult to access, particularly considering the high value of the equipment in question and the potential health and safety impacts of accidents. Now, IoT sensor data can stream into Hadoop from pumps, wells, and other equipment much more frequently, and at lower cost, than collecting the same data manually. This helps guide skilled workers to do what sensors cannot: repair or replace machines. The machine data can be enriched with other data streams on weather, seismic activity, or social media sentiment to paint a more complete picture of what’s happening in the field. Algorithms then parse that large, multifaceted dataset in Hadoop to discover subtle patterns and compare expected with actual outcomes. Did a piece of equipment fail sooner than expected, and if so, what similar gear might be at risk of doing the same? Data-driven, preventative upkeep keeps equipment running with less risk of accident and lower maintenance costs.
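
As one illustration of comparing expected with actual outcomes, the sketch below flags pumps whose latest vibration reading drifts well beyond their own historical baseline. The column names, input file, and z-score threshold are hypothetical.

    # Hypothetical sketch: flag equipment drifting from its own baseline.
    # Column names, the input file, and the threshold are assumptions.
    import pandas as pd

    readings = pd.read_csv("pump_sensor_history.csv")  # hypothetical extract
    stats = (readings.groupby("pump_id")["vibration_mm_s"]
                     .agg(["mean", "std"]).reset_index())
    latest = readings.sort_values("timestamp").groupby("pump_id").tail(1)
    latest = latest.merge(stats, on="pump_id")

    # A reading more than 3 standard deviations above its baseline is a
    # candidate for targeted inspection before the pump fails outright.
    z = (latest["vibration_mm_s"] - latest["mean"]) / latest["std"]
    print(latest.loc[z > 3.0, ["pump_id", "vibration_mm_s"]])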

Slow Decline Curves with Production Parameter Optimization

Oil companies need to manage the decline in production from their existing wells, since new discoveries are harder and harder to come by. Decline Curve Analysis (DCA) uses a well’s past production to estimate its future output. However, historic data usually reflects relatively stable production rates, whereas a well’s decline towards the end of its life follows a non-linear pattern: it usually declines more quickly as it depletes. For a well near the end of its life, past is not prologue. Production parameter optimization is the intelligent management of the parameters that maximize a well’s useful life, such as pressures, flow rates, and the thermal characteristics of injected fluid mixtures. Machine learning algorithms can analyze massive volumes of sensor data from multiple wells to determine the best combination of these controllable parameters, and HDP’s capabilities for data discovery and subsequent analytics can help the well’s owner or lessee make the most of that resource.
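
As a concrete sketch of DCA, the snippet below fits the classic Arps hyperbolic decline model, q(t) = qi / (1 + b·Di·t)^(1/b), to monthly production with scipy and forecasts a future rate. The production figures are made-up numbers for illustration only.

    # Sketch: fit the Arps hyperbolic decline model to past production.
    # The production numbers below are made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def arps(t, qi, di, b):
        """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
        return qi / (1.0 + b * di * t) ** (1.0 / b)

    months = np.arange(12)
    rate = np.array([980, 760, 640, 560, 500, 455,
                     420, 392, 368, 348, 331, 316], dtype=float)

    # Fit initial rate qi, initial decline di, and hyperbolic exponent b.
    (qi, di, b), _ = curve_fit(arps, months, rate, p0=(1000.0, 0.3, 0.9))
    print(f"Forecast rate at month 24: {arps(24, qi, di, b):.0f} bbl/month")

With the fitted curve as the expected baseline, deviations in the actual sensor data point to where adjusting pressures, flow rates, or injection temperatures could slow the decline.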

