The adoption of big data solutions is changing businesses drastically. In the oil industry, it yields a wide range of direct and indirect benefits, and data capture solutions are indeed actively employed by oil-producing companies. These operators utilize a diverse range of platforms and tools, including automation and automatic process control systems, SCADA (Supervisory Control And Data Acquisition), and field data warehouses in the form of databases, logs, and virtual models. What matters most, however, is not the big data itself but the integration of all these data sources and processing tools into one effective, transparent system that makes every industrial process visible and manageable.
Growing challenge for the Middle East and beyond
Let’s start at the beginning: the well data acquired during field development, such as geophysical and production data. Even a single oil field can generate an amount of data equivalent to the total volume of information created digitally by humanity before 2003. Such a massive influx of figures, numbers, reports, and documents demands high-capacity, robust solutions and intelligent analytics to address current business objectives effectively.
In reality, as we at Axellect have seen in our global practice, this data is often scattered across different platforms. That fragmentation hinders the ability to derive valuable insights from operations and assets, and it impedes efficient, environmentally conscious planning based on a comprehensive view of events. A significantly deeper integration of geological and information systems is therefore needed to establish a truly effective digital field.
This issue is of utmost importance for the Middle East, given the region’s heavy investment in expanding oil-producing capacity. According to one of the latest PwC reports, the OPEC countries, especially those in the GCC, plan to significantly increase their drilling capacity; in Kuwait, for example, by up to 25%.
The leading players, the UAE and Saudi Arabia, are making the most progress and raising their already ambitious bar even higher. For instance, ADNOC (Abu Dhabi National Oil Company) originally planned to boost its capacity from 4 to 5 million barrels a day (b/d) by 2030 but has since moved this target forward to 2027. For its part, Saudi Aramco expects to increase its maximum sustainable oil production capacity to 13 million b/d by 2027, from about 12 million b/d currently, and to raise gas production by 50% by the end of this decade.
In the prevailing scenario, where the world remains reliant on hydrocarbons, oil and gas producers in the Middle East see far more merit in investing in upstream production potential. What is vital here is choosing the right technology and approach.
Choosing the right tech solution
Today, the enterprise application market provides established tools for data analytics and for integrating data into a unified view. These tools include enterprise data warehouses, Data Lakes, and a variety of Enterprise Service Buses (ESBs) that seamlessly merge existing services into an efficient platform for generating the “new oil”: big data.
To determine the appropriate solution and methodology, it’s essential to define the business problem: the specific need that integration platforms should address within a company-wide digitalization effort. Whether the aim is more efficient drilling, time savings, or employee risk mitigation should be established during tool selection. Crucially, the fundamental step is selecting an objective that aligns with the company’s business strategy and is consistent with management’s plans and vision for the future. Without this foundation, implementing any IT service becomes merely playing with technology.
During the initial stages of deploying big data analytics tools, companies can explore solutions that are cost-effective from both a financial and an infrastructure perspective. These might include ESBs, in their open-source and proprietary versions. Equipped with standard tools such as data import/export and API support, these integration platforms connect disparate existing systems, as the sketch below illustrates. In the early phases of small-field development, such a platform can serve as the basis for future evolution.
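As a minimal sketch of the pattern an ESB automates and manages at scale, consider a bridge that exports readings from one system’s API and imports them into another’s. Everything here is a hypothetical placeholder: the endpoint URLs, the token, and the payload shape are illustrative, not references to any specific product.

```python
# Minimal sketch of the data bridge an ESB automates: pull readings from one
# system's REST API and push them to another. All URLs, field names, and the
# token below are hypothetical placeholders.
import requests

SCADA_EXPORT_URL = "https://scada.example.internal/api/v1/readings"  # hypothetical
WAREHOUSE_IMPORT_URL = "https://dwh.example.internal/api/v1/ingest"  # hypothetical
API_TOKEN = "replace-with-real-token"

def sync_latest_readings() -> int:
    """Fetch the latest well readings and forward them to the warehouse."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}

    # Export step: read from the source system.
    response = requests.get(SCADA_EXPORT_URL, headers=headers, timeout=30)
    response.raise_for_status()
    readings = response.json()

    # Import step: write to the target system in its expected format.
    result = requests.post(WAREHOUSE_IMPORT_URL, json=readings,
                           headers=headers, timeout=30)
    result.raise_for_status()
    return len(readings)

if __name__ == "__main__":
    print(f"Synchronized {sync_latest_readings()} readings")
```

A real ESB adds routing, transformation, retry logic, and monitoring on top of this pattern; the point is that each connected system only needs to expose standard import/export interfaces.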
Scaling can be achieved by implementing a Data Lake, but its establishment entails substantial investment: customers will need to allocate resources for robust hardware, massive server infrastructure, and enhanced computing capacity. For large fields, where data production reaches terabytes in volume, this may necessitate building an in-house Enterprise Data Warehouse (EDW) and/or a data center.
Once hardware-related issues have been settled, the company should begin sorting data according to the assigned business tasks: the Data Lake should only receive data relevant to accomplishing the stated goals. System configuration should be based on discretization, that is, determining the time intervals for information updates and the format of the data, structured or unstructured, stored within the warehouse.
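As a rough illustration, such discretization settings might be declared per data source, pairing each stream with an update interval and a format tag, then filtered against the stated business goals. The source names below are hypothetical:

```python
# Illustrative ingestion configuration for a Data Lake. Each hypothetical
# source declares how often it is polled ("discretization") and whether its
# payload is structured or unstructured.
INGESTION_CONFIG = {
    "wellhead_pressure":  {"interval_seconds": 60,    "format": "structured"},
    "daily_drilling_log": {"interval_seconds": 86400, "format": "unstructured"},
    "geophysical_survey": {"interval_seconds": 3600,  "format": "structured"},
}

# Only sources tagged as relevant to the stated business goal are ingested.
RELEVANT_SOURCES = {"wellhead_pressure", "geophysical_survey"}

def should_ingest(source: str) -> bool:
    """Filter out data that does not serve the assigned business tasks."""
    return source in RELEVANT_SOURCES

for name, cfg in INGESTION_CONFIG.items():
    if should_ingest(name):
        print(f"{name}: poll every {cfg['interval_seconds']}s as {cfg['format']} data")
```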
The third phase of implementing a Data Lake involves ensuring data security. Typically, an intermediary zone is established on the customer’s side to create a clear separation between the corporate and public networks.
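For illustration, here is a minimal sketch of a relay service that could run in such an intermediary zone, forwarding only an explicit whitelist of datasets from the public-facing side to the corporate network. The hostnames and dataset names are hypothetical:

```python
# Sketch of a relay for the intermediary (staging) zone: it forwards only
# whitelisted datasets across the boundary between the public-facing side
# and the corporate network. All hostnames and dataset names are hypothetical.
import requests

PUBLIC_SOURCE = "https://edge.example.com/export"       # public-side endpoint (hypothetical)
CORPORATE_SINK = "https://lake.corp.internal/ingest"    # corporate-side endpoint (hypothetical)
ALLOWED_DATASETS = {"production_rates", "well_status"}  # explicit whitelist

def relay(dataset: str) -> None:
    """Pull one cleared dataset from the public side and push it inward."""
    if dataset not in ALLOWED_DATASETS:
        raise ValueError(f"Dataset '{dataset}' is not cleared to cross the boundary")
    payload = requests.get(f"{PUBLIC_SOURCE}/{dataset}", timeout=30)
    payload.raise_for_status()
    requests.post(f"{CORPORATE_SINK}/{dataset}",
                  json=payload.json(), timeout=30).raise_for_status()

for ds in ALLOWED_DATASETS:
    relay(ds)
```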
The final challenge involves integrating legacy systems that lack API support. This can be addressed by employing backups and uploading data in SQL format, or by relying on established industrial protocols such as OPC DA and OPC UA to facilitate data exchange and ensure cross-platform compatibility.
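As an example of the protocol route, the following sketch reads a single tag from a legacy controller over OPC UA using the open-source python-opcua package (one of several available client libraries). The server address and node identifier are hypothetical placeholders:

```python
# Minimal sketch of reading a value from a legacy controller over OPC UA
# with the open-source python-opcua package (pip install opcua). The server
# address and node identifier below are hypothetical placeholders.
from opcua import Client

SERVER_URL = "opc.tcp://192.168.0.10:4840"  # hypothetical legacy controller
NODE_ID = "ns=2;i=1001"                     # hypothetical tag, e.g. a pump pressure

client = Client(SERVER_URL)
try:
    client.connect()
    node = client.get_node(NODE_ID)
    value = node.get_value()
    print(f"Current value of {NODE_ID}: {value}")
finally:
    client.disconnect()
```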
The Data Lake provides a significant advantage to companies aiming for a comprehensive, real-time overview of their operations. The information retrieved from the lake can serve a wide range of products, including SAP ERP systems and inventory control platforms. To maximize its positive impact, it’s crucial to ensure that these systems receive live data without any delays.
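One common way to achieve this is a streaming pipeline. The sketch below assumes a Kafka-based setup with the kafka-python package; the topic and broker names are hypothetical, and the ERP handoff is represented by a stub:

```python
# Sketch of delivering live field data downstream with minimal delay,
# assuming a Kafka-based pipeline (pip install kafka-python). Topic and
# broker names are hypothetical; the ERP handoff is a stub.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "field-sensor-readings",                  # hypothetical topic fed by the lake
    bootstrap_servers="broker.example:9092",  # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

def forward_to_erp(reading: dict) -> None:
    """Stub for the handoff to an ERP or inventory-control platform."""
    print(f"Forwarding reading from well {reading.get('well_id')} to ERP")

# Each message is relayed as soon as it arrives, keeping downstream systems live.
for message in consumer:
    forward_to_erp(message.value)
```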
Open opportunities
Hence, the Data Lake serves as a powerful data integration tool. Numerous solutions are available for deployment in the upstream sector. For instance, Schlumberger has developed one of the most widely used integration solutions in the oil industry: the cloud-based Delfi environment. This product encompasses a range of specialized capabilities, including the DrillPlan solution for drilling and the ProdOps solution for production operations, among others. The Data Lake is fed with data, which is then processed and delivered in a ready-to-use format to modules already familiar to industry experts.
It’s important to note that while data integration often doesn’t yield immediately visible financial results, it does bring significant benefits in terms of operational efficiency.
Firstly, geology and geophysics experts can concentrate on their core responsibilities instead of wasting time on non-essential operations such as data retrieval and uploading, which boosts their efficiency.
Secondly, having a comprehensive view of the situation allows business leaders and managers to improve their planning quality significantly. Top- and mid-level management gains the ability to respond swiftly to changes in the stock market, asset valuation, listings, audits, and more.
Finally, data integration enhances internal collaboration across departments, an area that is often chaotic on oil fields and affects contractors as well. Well-organized work with big data, for example, greatly simplifies warehouse operations.