
PROFITING THROUGH PERFORMANCE

Brad Tolbert, Stone Ridge Technology, USA, discusses the evolution of high performance computing and reservoir simulation in the industry.

The way in which oil and gas companies value and use data in their business has evolved rapidly in recent years. Some of this has been driven by the 2015 downturn and the unrelenting pressure on energy companies to become more efficient and to lower costs. These efforts will become even more critical as the industry climbs out of the current slump caused by the COVID-19 pandemic. Historically cautious energy companies are now aggressively pursuing new technologies that can accelerate this process. This especially holds true in the way companies generate, process and use data to make more rapid and statistically sound decisions that can impact their business. An illustrative example is the emergence of new technologies such as machine learning, and the changing ways that companies use existing tools such as reservoir simulation.

A key goal of reservoir simulation is reducing the uncertainty in forecasting. Uncertainty is introduced to reservoir modelling primarily by the incomplete or imprecise knowledge obtained from subsurface measurements. The seismic and well data used to create reservoir models is by nature sparse and overlaid with many assumptions and approximations that guide its filtering and analysis. It is important to properly represent the uncertainty in the reservoir modelling process so that decision-makers understand the risk associated with each decision.

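As a purely illustrative sketch of what representing this uncertainty can look like in practice, the snippet below draws a hypothetical ensemble of cumulative-production forecasts from assumed input distributions and summarises the spread as low, mid and high percentiles; the proxy relationship, distributions and figures are invented for demonstration and are not tied to any real field or simulator.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical ensemble: each realisation perturbs uncertain inputs
# (porosity and permeability multipliers) and maps them to a cumulative
# production figure through a deliberately simple proxy relationship.
n_realisations = 1000
porosity_mult = rng.normal(loc=1.0, scale=0.1, size=n_realisations)
perm_mult = rng.lognormal(mean=0.0, sigma=0.3, size=n_realisations)

base_case_mmbbl = 50.0  # assumed base-case cumulative production
cum_production = base_case_mmbbl * porosity_mult * np.sqrt(perm_mult)

# Summarise the range of outcomes for decision-makers as low/mid/high cases.
low, mid, high = np.percentile(cum_production, [10, 50, 90])
print(f"Low (10th percentile): {low:.1f} million bbl")
print(f"Mid (50th percentile): {mid:.1f} million bbl")
print(f"High (90th percentile): {high:.1f} million bbl")
```
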
The traditional approach to dynamic reservoir modelling relies on a single model or a small number of scenarios represented with a high, medium, and low probability. These models represent the 'best guess' of the features of the reservoir and are used to make production and investment decisions for the asset. By using such a small sample of models to represent the reservoir, engineers are thinly sampling the space of possible outcomes. The bottleneck to using a larger sample has historically been the limitations of the reservoir simulation tools available in the industry; there was simply not enough time or resources to carry out a complete survey of model uncertainty. This has begun to change in the last decade for two essential reasons.

High performance computing
The first is the evolution of the high performance computing (HPC) industry and the emergence of faster and cheaper hardware. As the increase in clock speeds of central processing units (CPUs) began to level off in the mid-2000s, the HPC market shifted to multi-core development by putting multiple cores on a single processor socket. This led to a dramatic performance increase for processes that could be executed on several cores simultaneously. Performance continued to increase as more cores were added with each new generation of processors. As the market moved into multi-core development, another key technology, graphics processing units (GPUs), emerged in the HPC industry. GPUs contain thousands of small, efficient cores that work simultaneously. They were traditionally used for fast 3D game rendering but began to be harnessed more broadly to accelerate computational workloads. Not all applications could take advantage of this new hardware, but those that could showed remarkable speedups. The top commercial supercomputers in the industry, such as Eni's HPC5 and Total's Pangea III, are both massive GPU-based clusters.

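As a minimal sketch of why this shift matters for workloads such as ensemble reservoir simulation, the example below spreads many independent (and entirely hypothetical) model evaluations across CPU cores using Python's standard multiprocessing pool; the evaluate_realisation function is a toy stand-in for a real simulator run, not any vendor's API.

```python
import multiprocessing as mp
import math


def evaluate_realisation(seed: int) -> float:
    """Toy stand-in for one simulation run: a CPU-bound calculation that
    depends only on its own inputs, so realisations can run on separate
    cores with no communication between them."""
    value = 0.0
    for i in range(1, 200_000):
        value += math.sin(seed + i) / i
    return value


if __name__ == "__main__":
    seeds = range(64)  # 64 independent realisations of the model
    # A process pool spreads the realisations across all available cores;
    # because the tasks are independent, throughput scales with core count.
    with mp.Pool() as pool:
        results = pool.map(evaluate_realisation, seeds)
    print(f"Completed {len(results)} realisations")
```
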
The emergence of cloud services has also shaped the HPC market by making modern hardware more accessible to companies. This is especially true for small to mid-sized oil and gas companies that are limited by the large upfront cost of traditional on-premise HPC systems. By using the cloud, these companies have access to the latest hardware generations at an entry price point that is much lower than on-premise systems. Reservoir simulation is a natural fit for the cloud due to its cyclical usage. The duty cycle of reservoir simulation within most companies dramatically shifts up and down as projects and deadlines come and go. The option to only pay for the systems when they are being used is appealing to many companies, especially if their reservoir simulation usage changes month to month. The drawbacks of using applications such as reservoir simulation in the cloud centre on security concerns and the economics of high-usage cases.

Even so, there is no denying that cloud technology has had a profound impact on the HPC community. Australian oil and gas producer Woodside is an example of a company that now runs all of its HPC exclusively on the cloud. They have found that the burst-like nature of reservoir simulation is well matched to the dynamics of the cloud. While on one day there may be no simulations required, the next may demand tens of thousands of concurrent models. Costs are more directly tied to the duration and resources consumed by each simulation than with on-premise options; however, the speed at which results are generated from inputs due to parallel execution means decisions are accelerated, and the value of that acceleration is much higher than the incurred cost of immediately scalable simulation.

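To make the usage-dependent cost trade-off concrete, the back-of-the-envelope sketch below compares an amortised on-premise cluster with pay-per-use cloud capacity at different levels of annual utilisation; every price and figure in it is an assumption chosen for illustration, not a quote from any provider.

```python
# Illustrative cost comparison for bursty reservoir simulation workloads.
# All figures below are assumed for demonstration only.

ON_PREM_CAPEX = 1_000_000.0      # assumed cluster purchase price (USD)
ON_PREM_LIFETIME_YEARS = 4       # assumed depreciation period
ON_PREM_ANNUAL_OPEX = 150_000.0  # assumed power, cooling, admin (USD/yr)

CLOUD_RATE_PER_NODE_HOUR = 3.0   # assumed on-demand price (USD per node-hour)
CLUSTER_NODES = 100


def on_prem_annual_cost() -> float:
    """Amortised yearly cost, paid regardless of how busy the cluster is."""
    return ON_PREM_CAPEX / ON_PREM_LIFETIME_YEARS + ON_PREM_ANNUAL_OPEX


def cloud_annual_cost(busy_hours_per_year: float) -> float:
    """Pay-per-use cost: only the hours actually consumed are billed."""
    return busy_hours_per_year * CLUSTER_NODES * CLOUD_RATE_PER_NODE_HOUR


if __name__ == "__main__":
    for busy_hours in (200, 1000, 3000, 8000):
        cloud = cloud_annual_cost(busy_hours)
        onprem = on_prem_annual_cost()
        cheaper = "cloud" if cloud < onprem else "on-premise"
        print(f"{busy_hours:>5} busy hrs/yr: cloud ${cloud:>10,.0f} "
              f"vs on-prem ${onprem:>10,.0f} -> {cheaper} is cheaper")
```

The pattern matches the article's point: at low or highly variable utilisation the pay-per-use model wins, while sustained high usage shifts the economics back toward owned hardware.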