The oil and gas industry was one of the first aggregators of what we now call “big data,” but the amount of information these companies currently collect is truly unprecedented. In 1990, one square kilometer yielded 300 megabytes of seismic data; in 2015, it was 10 petabytes—roughly 33 million times more. This report features highlights from recent Strata+Hadoop World conferences to demonstrate how the petroleum industry uses data science in its operations today.
Oil companies use machine learning to mitigate short-term operational risk and to optimize long-term reservoir management. But as author Naveen Viswanath explains, machine learning models alone can’t distinguish good data from bad, or reasonable results from unreasonable ones. Human intelligence—including a deep understanding of how data sources fit into business use cases—is crucial for making these distinctions.
With this report, you’ll learn about the challenges these companies face in collecting diverse data for seismic research, drilling, mechanical maintenance, worldwide logistics, and even gas station retail.