New Model Needed for Oil, Gas Data Communications

The increasing remoteness of global onshore and offshore oil and gas exploration and production efforts – coupled with the growing amount of data being gathered from devices and applications – has created the need for a new distributed architecture for remote technologies. Officials with Portland, Maine-based Kepware Technologies, which develops communication and interoperability software solutions for the automation industry, say an alternative to the Host-Centric model that is used by the oil and gas industry is needed.

Under this new model, a single data collector can exist on any computer tied to a communications network and can service multiple client applications interested in the same data from the same devices. Multiple data collectors can be spread across multiple computers, each with its own exclusive connection to the network, allowing communications to run in parallel across the various device types. This shortens the overall time needed to acquire data and reduces costs on pay-per-byte connections. Data collectors also would be placed in the field, close to devices, to reduce the amount of interference.
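To make the concept concrete, the sketch below shows – in illustrative Python, with hypothetical names that are not part of any Kepware product – how a single data collector can poll a device once per cycle and fan the result out to every subscribed client application, so that adding clients does not add traffic on the bandwidth-limited field link.

```python
# Illustrative sketch only: one data collector servicing multiple client
# applications from a single poll of a field device. All names here are
# hypothetical, not drawn from any specific product.

class DataCollector:
    def __init__(self, read_device):
        self.read_device = read_device      # callable() -> dict of tag values
        self.subscribers = []               # client callbacks interested in this data

    def subscribe(self, on_data):
        self.subscribers.append(on_data)

    def poll_cycle(self):
        data = self.read_device()           # one request on the field link
        for on_data in self.subscribers:    # shared with every client at no extra
            on_data(data)                   # cost to the field network


# Example: a SCADA host and a historian share the result of one poll.
collector = DataCollector(lambda: {"tubing_pressure": 1450, "flow_rate": 320})
collector.subscribe(lambda d: print("SCADA host received", d))
collector.subscribe(lambda d: print("Historian received", d))
collector.poll_cycle()
```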

“By removing the inefficiency of making repeated requests, less bandwidth is needed to provide the same data set,” according to a Kepware white paper authored by Tony Paine, president and CEO of Kepware, and Russel Treat, president and CEO of EnergySys Corporation, to introduce the idea.

This architecture allows companies to minimize points of failure within their system, said Paine and Treat. “By distributing the data collection from the client applications, we have introduced an abstraction layer between the vendor-specific protocol and the sharing of the information contained within the protocol.”

Currently, oil and gas companies use a host-centric model for data collection, in which data collection resides on the same computer as the supervisory control and data acquisition (SCADA) host. Data collection can be owned by the SCADA polling engine, which must contain the protocol drivers required to pull data directly from the field devices. In some cases, separate standalone applications that expose a generic interface may collect data between the applications and field devices. A protocol is a uniform set of rules that allows two electronic devices to connect to and exchange information with one another.

However, this model has shortcomings in terms of bandwidth, cost and security. Available bandwidth can quickly be consumed as more applications and devices are added, increasing the volume of communications traffic on the network. As a result, data requests may periodically be dropped and never make it to the device. If multiple data collectors are required to retrieve all the data of interest to each application, and each requires exclusive access to the communications medium, the request and response transactions must be processed serially. This means a delay in one transaction has an additive impact on the overall communications cycle, because the next transaction cannot be sent until the previous one completes or times out.
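The additive effect is easy to see with some back-of-the-envelope arithmetic; the per-device poll times and timeout below are assumed values chosen only to illustrate the point.

```python
# Illustrative timing only; poll times and timeout are assumed values.

poll_time = {"flow_computer": 2.0, "rtu": 1.5, "plc": 0.5}  # seconds per request/response
timeout = 10.0                                              # seconds before a request is abandoned

# Host-centric model: one shared, exclusive link, so transactions run back to back.
serial_cycle = sum(poll_time.values())                      # 4.0 s when every device answers
serial_cycle_one_hang = timeout + poll_time["rtu"] + poll_time["plc"]  # 12.0 s if the flow computer times out

# Distributed model: each collector has its own connection, so polls overlap
# and the cycle is bounded by the slowest single device rather than the sum.
parallel_cycle = max(poll_time.values())                    # 2.0 s

print(serial_cycle, serial_cycle_one_hang, parallel_cycle)
```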

“Unfortunately, the many types of field devices that originate from a wide variety of vendors do not support a universal protocol,” said Paine and Treat. “As such, there is a 1:1 correlation between the number of data collectors required to run on the host communication server and the number of vendor-specific device types that are part of the overall operation.”

The inefficiency of multiple data collectors making requests to the same devices at the same time can also raise costs in cases where there is a cost per byte for the data being transmitted. Many of the vendor-specific protocols were developed with these bandwidth limitations and cost concerns in mind, so vendors have engineered them down to the bare minimum needed to access data within the device. Paine and Treat noted that these protocols are inherently insecure and could be subject to man-in-the-middle attacks.

“This may not be a concern when communications are limited to a private network with physical barriers; however, there usually comes a time when this data needs to be made available externally over public networks, and secure communications will need to be implemented.” 

IMPLEMENTING A DISTRIBUTED COMMUNICATIONS ARCHITECTURE

A company could implement a Distributed Communications Architecture in many ways, but Paine and Treat point to the OPC Unified Architecture (UA) standard, a multi-purpose set of services that a data collector – or OPC server – provides to any application that is ready to consume information.

OPC UA is the interoperability standard for the secure and reliable exchange of data in the industrial automation space and in other industries. According to the OPC Foundation, which oversees development and maintenance of the standard, OPC is a series of specifications developed by industry vendors, end-users and software developers, which define the interface between clients and servers, as well as servers and servers, including real-time data access, monitoring of alarms and events, access to historical data and other applications.

When it was first released in 1996, the standard was intended to abstract programmable logic controller (PLC) specific protocols into a standardized interface, allowing human machine interface/supervisory control and data acquisition (HMI/SCADA) systems to interface with a “middle-man” who would convert generic-OPC read/write requests into device specific requests and vice versa, according to the OPC Foundation.

OPC UA specifications have been developed to address the challenges of security and data modelling that have accompanied the introduction of service-oriented architectures in manufacturing systems.

“With all these automation companies like Siemens and General Electric having native protocols with all of the different SCADA and HMI applications, you can’t write all these different protocol drivers to communicate with different types of devices,” said Steve Sponseller, product manager for business development with Kepware, in an interview with Rigzone. “Why not come out with a standard so applications on top only have to support the OPC standard?”
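In practice, a client application that supports only OPC UA can read a tag without knowing anything about the underlying device protocol. Below is a minimal sketch using the open-source python-opcua package; the endpoint URL, namespace index and tag address are placeholders rather than a specific Kepware configuration.

```python
# Minimal OPC UA read sketch using the open-source python-opcua package.
# The endpoint, namespace index and tag address are assumed placeholders.

from opcua import Client

client = Client("opc.tcp://127.0.0.1:49320")                  # assumed server endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Channel1.Device1.Tag1")    # assumed tag address
    print("value:", node.get_value())
    # The client never touches the vendor-specific device protocol; the OPC UA
    # server (the data collector) performs that translation on its side.
finally:
    client.disconnect()
```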

Kepware is seeking to address this issue with its KEPServerEX, which can be connected to several types of devices on a rig or wellsite through its driver layer. Using different drivers, Kepware pulls data from databases, other OPC servers, some HMI systems, and other sources through Kepware’s custom driver. That data is then passed from the rig or piece of equipment on to connected clients, including a corporate historian that might be centrally located at an office, where it is available for playback when bandwidth allows. When bandwidth is not available – due to weather, obstructions or other data crossing the link – data continues to be collected locally, and playback to the historian resumes once communications become available again.

“That way there are no holes in the data, which is important for regulatory reporting and analytical purposes,” said Sponseller. Kepware’s solution can be used to connect different drilling and production systems and devices, such as HMI software and .NET applications, devices such as PLCs and digital signal controllers (DSCs) that communicate over industry protocols, and, in midstream, remote terminal units or flow computers that compute and store flow volumes and alarms.
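The local buffering and later playback Sponseller describes can be pictured with a simple store-and-forward sketch. This illustrates the concept only – it is not Kepware’s implementation – and the send_to_historian and link_is_up callables are hypothetical.

```python
# Conceptual store-and-forward sketch; not Kepware's implementation.
import time
from collections import deque

class StoreAndForward:
    def __init__(self, send_to_historian, link_is_up):
        self.send = send_to_historian      # callable(timestamp, tag, value)
        self.link_is_up = link_is_up       # callable() -> bool
        self.buffer = deque()              # local storage while the link is down

    def record(self, tag, value):
        self.buffer.append((time.time(), tag, value))
        self.flush()

    def flush(self):
        # Replay the oldest samples first so the historian ends up with no holes.
        while self.buffer and self.link_is_up():
            ts, tag, value = self.buffer[0]
            self.send(ts, tag, value)
            self.buffer.popleft()
```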

The devices would act as monitors of equipment on a rig or wellsite – for example, monitoring vibrations and predicting when a piece of machinery will need maintenance before it breaks down – and also collect data from the operation itself. Examples of data being requested include measurement-while-drilling data, which comes from sensors downhole with the drill bit, as well as data on the rig such as weight on bit.

“In production, the same is true for monitoring equipment like artificial lift systems, pumps and valves, as well as monitoring production levels so that engineering can make decisions on how to increase performance or even shut in a well temporarily,” Sponseller noted.

CONTROL SYSTEMS IN O&G SYSTEMS ‘VERY DIFFERENT’

In manufacturing and process plants, control systems consist of the integration of HMI software, programmable logic controllers, distributed control systems, computers and a wide range of automation software through high-speed Ethernet communications.

However, control systems in geographically distributed oil and gas operations are very different. In these systems, SCADA is integrated with a more loosely coupled combination of control devices in the field, local HMI software, and wide-area communications that include wireless, fiber optic and telephone services, said Paine and Treat.

“In a typical manufacturing plant or even a refinery, you have a very defined space that is relatively manageable compared to a very spread out well field,” said Sponseller. “At a plant, it is not that difficult to run Ethernet cables throughout, connecting everything in real-time, where communications at millisecond speed is not an unreasonable request.”

However, it is not possible to run Ethernet out to all the sites in oil and gas operations – or in water, wastewater, and power transmission and distribution – due to the limited bandwidth of radio, cellular and satellite links.

“We’re talking more about seconds-to-minutes polling frequencies,” said Sponseller. “There also might be many more devices to poll in a large oil or gas field compared to an isolated plant. For some companies, we’re talking to over 1,000 devices. That’s a lot of communications to try and achieve through limited-bandwidth communications telemetry.”
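A rough calculation shows how quickly that adds up. The 1,000-device figure comes from Sponseller; the message size and poll interval below are assumptions made only for illustration.

```python
# Rough polling-load arithmetic. The device count is from the article; the
# message size and poll interval are assumed for illustration only.

devices = 1000
bytes_per_poll = 200      # assumed request + response size for a lean protocol
poll_interval_s = 60      # "seconds to minutes" polling frequency

sustained_kbps = devices * bytes_per_poll * 8 / poll_interval_s / 1000
print(f"~{sustained_kbps:.1f} kbit/s sustained")   # roughly 27 kbit/s, a large share of a low-rate radio link
```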

While Sponseller didn’t have an estimate of the bandwidth limitations companies face, he noted that, whatever bandwidth capacity exists, companies are maximizing it and needing more of it.

“The operational efficiencies they can gain by intelligently collecting and using data justifies the expense – not to mention safety and reporting for regulatory requirements,” he noted.

Depending on who owns the communications backbone, costs may be associated with the volume of data being transferred across the network, where the need for more data results in more operational expense, said Paine and Treat.

“If the company owns its own backbone, I guess you could say it’s ‘free,’ but that’s not really true, as they have to provide the appropriate equipment and systems to handle the extra load of data,” said Sponseller. “Obviously, if you are ‘renting’ someone else’s backbone, there is a cost that might be based on the amount of data.”

The growing potential for cyberattacks also means that data needs to be securely transmitted to ensure that sensitive data cannot be intercepted and used for malicious purposes.

“Together, these factors result in a complex and expensive architecture for remote communications within an oil and gas operation.”

In today’s manufacturing world, plants are sending their data up through the enterprise to corporate for company-wide efficiency. Companies now want to include data from the plant layer – or an oil and gas facility – throughout the enterprise, integrating with ERP and MES systems as well as with databases for Big Data analytics and preventive asset maintenance, or enterprise asset management. Very soon, the industrial Internet of Things will mean every piece of equipment has some sort of sensor on it collecting data and feeding it into the enterprise, Sponseller said.

“So similarly, if we start thinking about placing the data collectors out in the field, closer to the devices, this would have a similar paradigm to the manufacturing plants funneling the collected data up through the enterprise,” Sponseller explained.

One data collector with all the necessary protocols could gather data from a field – even a large offshore field.

“Because many operations in the remote oil and gas space are somewhat unmanned, they often double up on everything in case of a failure of any kind,” Sponseller explained. “So, they might actually put two devices out there in every field – one as a primary and one as a secondary. Both would be physically connected to all the data sources, but only the primary would be actively polling the sources for data, unless it goes down. Then the secondary would pick up the polling.”
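A hot-standby arrangement like the one Sponseller describes can be sketched as a simple heartbeat check; the 30-second timeout and the function names are assumptions, not a vendor specification.

```python
# Hot-standby sketch: the secondary collector only starts polling when the
# primary's heartbeat goes stale. The timeout and names are illustrative.
import time

HEARTBEAT_TIMEOUT_S = 30.0

def should_poll(role, last_primary_heartbeat):
    """Decide whether this collector should actively poll the field devices."""
    if role == "primary":
        return True
    # The secondary stays passive while the primary is alive, then takes over.
    return (time.time() - last_primary_heartbeat) > HEARTBEAT_TIMEOUT_S
```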
