Reducing Risk in the Petroleum Industry


Machine Data and Human Intelligence



The oil and gas industry was one of the first aggregators of what we now call “big data,” but the amount of information these companies currently collect is truly unprecedented. In 1990, one square kilometer yielded 300 megabytes of seismic data; in 2015, it was 10 petabytes—roughly 33 million times more. This report features highlights from recent Strata+Hadoop World conferences to demonstrate how the petroleum industry uses data science in its operations today.

Oil companies use machine learning to mitigate short-term operational risk and to optimize long-term reservoir management. But, as author Naveen Viswanath explains, machine learning models alone can’t distinguish between good and bad data or reasonable and unreasonable results. Human intelligence—including a deep understanding of how data sources fit into business use cases—is crucial for making these distinctions.

With this report, you’ll learn the challenges these companies face when collecting a variety of data for seismic research, drilling, mechanical maintenance, worldwide logistics, and even gas station retail.


O'Reilly Media, Inc.

O'Reilly Media spreads the knowledge of innovators through its books, online services, magazines, research, and conferences. Since 1978, O'Reilly has been a chronicler and catalyst of leading-edge development, homing in on the technology trends that really matter and galvanizing their adoption by amplifying "faint signals" from the alpha geeks who are creating the future. An active participant in the technology community, the company has a long history of advocacy, meme-making, and evangelism.
