Tuesday 22 September 2015

Embedded Control: Reaching for More

By an Automation World Contributing Writer

Not just reserved for consumer gadgets, embedded systems make use of advances in computing performance to extend the power of industrial controls.

It's been 50 years since Gordon Moore first articulated his famous law about computing performance, yet his observation continues to withstand the test of time. The semiconductor industry’s ability to continually pack more computing power into smaller spaces at lower prices has enabled incredible advances not only in consumer technologies but in industrial ones as well. Industrial automation is taking advantage of embedded technologies to consolidate electronics and extend the reach of processors to save money and boost productivity.

A case in point is the consolidation of the computing infrastructure inside the Glove Unique Reprocessing Unit (GURU) built by Pentamaster, an automation provider in Penang, Malaysia. The company has halved the number of industrial PCs driving the machine-vision modules in two of the unit’s seven workstations preparing used latex gloves for reuse.

The first vision module processes image data from four cameras in the automatic loading station to help position the gloves properly. The second vision module—in the fourth station, after chemical and thermal decontamination—also receives data from four cameras, but checks for cosmetic defects, reads the unique 2D code on each glove for traceability, and ensures that the glove is in the correct orientation for donning. The remaining three workstations find pinholes, decontaminate the gloves, and robotically package gloves that pass inspection and shred gloves that fail for recycling.

Pentamaster had found it necessary to upgrade the processors in the system to keep up with the evolving capabilities of machine vision. Executing complex algorithms on large data sets requires considerable processing power, and slow processing speeds restrict the number of high-resolution and high-frame-rate cameras that can be connected to an industrial PC.

To get the efficiencies that come with faster processing speeds, Pentamaster replaced the old processors in the vision modules with faster, multicore processors with integrated graphics processing. These third-generation Core i5 processors debuted Intel’s 3D tri-gate, 22 nm silicon architecture, running some important performance enhancements behind the scenes. Turbo Boost Technology 2.0, for example, adjusts processor speed automatically to match the required processing performance. Another example is Hyper-Threading Technology, which allows each processor core to work on two tasks simultaneously.

The enhancements reduced the inspection cycle to less than 2 seconds, boosting the inspection rate from 600 to 900 gloves/hr. The greater processing capability of the Core processor has also let Pentamaster consolidate the four industrial PCs supporting the vision modules to two. That, in turn, has simplified administration and maintenance, and has also reduced energy consumption and lowered the GURU’s operating costs.

Embedded remote diagnostics

The design team is already looking for other improvements, such as adding more testing routines to the set of vision processing algorithms already being used by GURU. Another avenue that the team is exploring is the Active Management Technology (AMT) designed into Intel’s vPro technology inside the Core family of microprocessors. This built-in remote management feature would allow Pentamaster to remotely manage, repair and protect the hardware in GURU’s computing infrastructure.

With this technology, operators or technicians can access outlying embedded systems remotely from their workstations and repair root-level software and firmware problems, including non-operational BIOS images and OS boot problems. To protect software and hardware that are known to be good, AMT allows supervisory users to install software remotely for making periodic updates, installing patches, and detecting and removing malware.

An important benefit of this technology is that a technician can diagnose problems remotely, even if the device containing the embedded system is not actually on, according to Shahram Mehraban, marketing director for Intel’s Industrial and Energy Solutions division. AMT can do this because it works below the operating system, a capability that differentiates it from conventional remote management software.

“Because conventional remote management typically relies on software that runs on an OS, the device must actually be running for someone to be able to remotely manage it,” Mehraban explains. “With AMT, even if the hard drive is corrupted and the device is turned off, you can still work below the OS to manage the device.”

This technology is useful in industrial PCs that run SCADA applications in manufacturing facilities and connect the factory floor to the company’s enterprise network and databases, Mehraban says. “We regularly have to do security and policy updates to various devices in the enterprise,” he says. “With this technology, we can extend this capability to perform them in our network of manufacturing facilities from a central location and administer all these patches at the same time.”

Although this remote technology is most commonly found on industrial PCs, it can be on any computing platform using Intel’s vPro processors. “A number of different applications on the factory floor today run on Intel silicon,” Mehraban notes. “Some high-end PLCs are based on Intel Core i5 and i7 platforms, and we have robotics and machine vision customers who are using our platforms.”

Supporting modular machine design

The ability to consolidate the computing systems within industrial equipment and add remote diagnostics below the operating system is not the only reason that embedded technology has become more attractive to industry in recent years. Another is that it supports a trend among machine builders to design and assemble their products from modular, off-the-shelf components. Because assembling equipment from pre-existing modules lowers design and assembly costs and shortens delivery times, modularity promotes customization and the building of increasingly complex machines fitted with sophisticated technology.

“The complexity of machines is continuously increasing as end users push builders to increase productivity,” notes Sari Germanos, technology marketing manager for the Ethernet Powerlink Standardization Group. “Manufacturers are trying to do more with machines.”

Not only do they set more stringent internal timing and safety requirements for ever faster machines, but they also want these machines to consume less energy, support preemptive maintenance programs, and communicate with other machinery and inventory management systems, Germanos adds.

Embedded technology helps builders satisfy this demand by providing modular components like drives, inverters, sensors and HMIs with intelligence at relatively low cost. “The components are controlled and sequenced by a central PLC or industrial PC,” Germanos says. “They are connected together by one industrial Ethernet network, the architectural backbone of the machine.”

Intelligent components like drives rely on a mix of analog and digital application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) to execute the motion profiles specified by the controlling PLC. In these cases, communications within the machine typically occur by means of industrial Ethernet hardware, advanced software protocols, and timing controlled by high-frequency FPGAs and ASICs.

The more notable advances in embedded technology tend to tighten integration within a machine. An important example is the integration of dual-core ARM processors with FPGAs from Altera and Xilinx. “These devices allow for very fast, low-latency computing within the FPGA hardware, and integrate complex software algorithms on the dual ARM core,” Germanos says.

This integration has allowed automation vendors to introduce highly integrated multi-axis drives at reasonable prices. “In turn, these drives allow machine builders to control several motors at once from one drive, making a machine more cost-efficient with better synchronization,” Germanos notes.

Tighter integration at the silicon level also allows component manufacturers to integrate electronic and mechanical functionality. Take for example an integrated motor like the AcoposMotor module from B&R Industrial Automation. “In this case, the traditional motor, drive, gearbox and encoder are integrated into one package, thus simplifying the logic and computational power required to control all of the components,” Germanos explains. “The PLC communicates with the bundle via a standard API [application programming interface] provided to the master controller in a standard XML format.”

Another example that Germanos offers of this tight integration is a mixed-signal processor from Analog Devices. “Here, an ARM-based processor can handle both analog and digital signals from the same piece of silicon,” he says. “This is ideal for motor control, and it provides a fast digital interface for the open source Powerlink Industrial Ethernet protocol.”

No FPGAs needed

Yet another form of integration rooted in silicon embeds real-time communication accelerators for standard network protocols like Ethernet Powerlink, EtherCAT and Profinet. A notable development here is the Industrial Communication Sub-System (ICSS) that Texas Instruments (TI) puts in its Sitara family of ARM Cortex-A series processors.

This embedded peripheral has helped developers eliminate the dedicated, fixed-function ASICs or FPGAs that they would otherwise have to use to embed the protocols for linking to deterministic, extremely low-latency industrial networks. “Pairing an FPGA with any general-purpose processor is common for implementing some communication protocols or low-latency I/O expansion when those features aren’t available on the host processor,” notes Adrian Valenzuela, TI’s marketing director.

“Seeing this trend in industrial automation increasing, we’ve implemented on-chip, low-latency accelerators specifically designed for replacing FPGAs,” he continues. Eliminating this external device not only saves between $2 and $10, but it also reduces the complexity of the system and development time.

TI has put the ICSS into its ARM-based Sitara processors because of the popularity of the fast, low-power ARM processor. “A modern device can contain a dual-core 1.5 GHz Cortex-A15 yielding 10,500 DMIPS [Dhrystone million instructions per second],” Valenzuela says, noting that TI has a growing portfolio of ARM-based devices that will be released in the future in both 32- and 64-bit configurations.

Another benefit of having an embedded communications accelerator like ICSS is that it brings a measure of modularity to the implementation of communications protocols. Users and vendors of industrial automation strive to adhere to communications standards to promote safety, but often find that getting and maintaining certifications for these standards as they evolve can be costly and time-consuming. Even worse, the continuing effort can impede time to market.

ICSS solves this problem by encapsulating several protocols into a Lego-like module that can be pre-certified. “This allows developers to focus their development time on the application,” Valenzuela says.

Juggling operating systems

Embedded technology makes another contribution to integrating the intelligent components in a piece of equipment and consolidating computing resources. Real-time operating systems (RTOSs) running on multicore embedded processors can execute specialized graphical and textual machine-control languages based on the IEC 61131 standard for PLCs. These RTOSs should also comply with the IEC 61508 standards to offer the redundancy required for ensuring a safety integrity level (SIL) of 3, according to Germanos.

Here, virtualization seems to have found an industrial application in helping the limited computing resources typically found in embedded systems to run several concurrent instances of operating systems, including RTOSs. Vendors are developing virtualization schemes that address the concerns over latency and reliability that have made industrial users reluctant to embrace virtualization in the past.

“The first generation of embedded virtualization was very difficult to implement,” Intel’s Mehraban notes. “We’ve come a long way in the past two or three years though. More of our customers are building multicore-based solutions that use hardware-based virtualization acceleration.”

Emerson Process Management, for example, has based the virtualization in its DeltaV controllers on a Dell PowerEdge VRTX shared infrastructure platform. To avoid latency problems, the system uses Intel’s fast Xeon processors on up to four computing nodes along with Intel Virtualization Technology (VT). A software arbiter known as a hypervisor assigns hardware to each guest OS according to a default scheme or to user-defined rules. Intel’s Advanced Programmable Interrupt Controller (APIC) technology also helps by offloading interrupt management from the hypervisor.

“Because virtualization can consolidate the workflows of multiple operating systems, it can consolidate things like an HMI, a soft PLC, and maybe a motion controller on a single platform,” Mehraban says. “Consolidation reduces the number of devices you have to maintain and service, and it increases the overall reliability and uptime.” In many cases, it can also reduce cabling and other installation costs.

Besides consolidating platforms, virtualization can contribute to network security by creating a partition to isolate the core of the system software. On the other side of the partition, the system software acts like a guest firewall that communicates with the outside world. Only specific information can cross the partition.

Source: http://www.automationworld.com/embedded-control/embedded-control-reaching-more

Tuesday 30 June 2015

Embedded Networking With CANopen

By Olaf Pfeiffer, first published in Circuit Cellar

When it comes to Embedded Networking, Embedded Internetworking seems to be the trend and the only topic around these days. Although the idea of having all our embedded devices accessible via the Internet is tempting, for many embedded applications Internet access does not address the real communication requirements, which are often internal to the device.


Selecting a Communication Protocol

The requirements for more internal communication come from another trend: adding more intelligence to many machines, appliances and other devices. The side effect of this trend is that more communication between I/O points or distributed control systems is required. For machine-internal communication, TCP/IP is usually complete overkill, especially if the embedded controllers are on the low end of the performance scale. Serial protocols are more cost-efficient solutions. Standard serial interfaces like UARTs, I2C or CAN are available on-chip with many microcontrollers in the 8-bit and 16-bit arena, allowing several nodes to be connected easily.

One of the problems with implementing Embedded Networking solutions based on these serial protocols is that, by themselves, they do not have a standardized application layer specifying how the exchanged data is structured and how or when it is exchanged. They usually cover only the Physical and Data Link Layers of the standard communication reference model.

Anybody implementing an Embedded Network with these protocols alone will therefore most likely end up with a proprietary solution. An internal communication specification has to be written, and most likely all the network nodes will be built in-house. Outsourcing is difficult due to the lack of available communication standards, and without higher-layer communication standards it is not easy for third parties to build efficient off-the-shelf plug-and-play components.

However, the availability of off-the-shelf components for Embedded Networking is becoming more and more important. In all industries there is constant pressure to further shorten development time and cut development costs.

Instead of developing all components from scratch, a machine manufacturer could choose off-the-shelf sensors and actuators with a standardized networking interface, allowing the development focus to be on system integration and on the components that bring the company's true IP into the product.

Of all the on-chip communication interfaces available on many microcontrollers these days, CAN is the one that gets us closest to the scenario outlined above. Existing standards based on CAN allow for the availability of off-the-shelf components like generic analog and digital I/O devices, and still provide the network designer with enough freedom to optimize the overall system to best meet the communication requirements of a specific application.

Higher Layer Protocols

A variety of standardized higher layer protocols are available based on CAN. Today, the most significant ones are DeviceNet and CANopen. DeviceNet was developed for factory automation and is strongest in the arena of material handling. Although it offers a very high level of off-the-shelf plug-and-play product availability, there is a price to pay: DeviceNet leaves only minimal room for customization, optimization and other tweaking of the network. For all applications where customization is desired, CANopen is the better alternative.

CANopen

The basic idea behind CANopen is simple: CANopen standardizes the way the communicated data is structured and exchanged. In addition, several Device Profiles for CANopen are standardized, and new ones are constantly being added. Device Profiles specify the data sets and communication models supported by modules such as Generic I/O, Encoders, Drives, etc. The CANopen standards support building off-the-shelf modules for plug-and-play system configurations, yet leave plenty of freedom for customizing nodes and communication paths. This allows manufacturers of devices with internal Embedded Networking to take advantage of off-the-shelf components where suitable, and still tweak the system for an optimized price/performance ratio to keep a competitive edge.

The Object Dictionary Concept

The core of any CANopen node is the Object Dictionary, a lookup table with a 16-bit index and an 8-bit sub-index. This allows for up to 255 sub-entries at each index. Each entry can be variable in type and length.

All process and communication related information/data is stored as entries in pre-defined locations of the object dictionary. Unused entries do not need to be implemented.
[Figure: Object Dictionary example]
From the network, object dictionary data of any node can be accessed in a point-to-point communication mode by issuing read or write requests to the node's object dictionary. Messages that contain requests or answers to/from the object dictionary are called Service Data Objects (SDO). As both process and configuration data are part of the object dictionary, this communication scheme immediately allows for configuring nodes and/or getting access to the process data.
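
To make the lookup-table idea concrete, here is a minimal C sketch of an object dictionary as an array of entries keyed by a 16-bit index and an 8-bit sub-index. The structure and function names are illustrative assumptions, not part of any particular CANopen stack; the indices 6000h and 6401h anticipate the Generic I/O example used later in this article.

```c
#include <stdint.h>
#include <stddef.h>

/* One object dictionary entry: 16-bit index, 8-bit sub-index,
   a length in bytes and a pointer to the actual data. */
typedef struct {
    uint16_t index;     /* e.g. 6000h for digital inputs            */
    uint8_t  subindex;  /* 0..255                                   */
    uint8_t  length;    /* size of the entry in bytes               */
    void    *data;      /* points to process or configuration data  */
} od_entry_t;

/* Example process data for a small I/O node. */
static uint8_t  digital_inputs[2];
static uint16_t analog_inputs[2];

/* The dictionary itself: only the entries this node supports. */
static od_entry_t object_dictionary[] = {
    { 0x6000, 1, 1, &digital_inputs[0] },
    { 0x6000, 2, 1, &digital_inputs[1] },
    { 0x6401, 1, 2, &analog_inputs[0]  },
    { 0x6401, 2, 2, &analog_inputs[1]  },
};

/* Lookup used when an SDO read or write request arrives. */
static od_entry_t *od_find(uint16_t index, uint8_t subindex)
{
    for (size_t i = 0; i < sizeof object_dictionary / sizeof object_dictionary[0]; i++) {
        if (object_dictionary[i].index == index &&
            object_dictionary[i].subindex == subindex)
            return &object_dictionary[i];
    }
    return NULL;  /* unsupported entry: the node would answer with an SDO abort */
}
```

An incoming SDO read or write request would end up in a call like od_find() with the requested index and sub-index, returning either the referenced data or an abort reply.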

Point-to-Point? Variable length? More than 8 bytes?

Those of you familiar with CAN probably have these questions after reading the previous paragraphs, as CAN itself does not really support these features - so how does CANopen do it?

Any message sent on CAN is a broadcast to all nodes. Which message IDs get used by which node is not part of the CAN specification and is usually determined by the application.

To allow for peer-to-peer communication, CANopen introduces a node ID that gets embedded into the SDO requests to the object dictionary. Each CANopen module on the network must have a unique node ID in the range from 1 to 127.

The default scenario is that any node has two CAN identifiers reserved for SDO requests to and SDO replies from the object dictionary. The default CAN ID for the Receive Service Data Object (RSDO - used to send requests to a node) is a base address of 600h plus the node ID number. The ID for the Transmit Service Data Object (TSDO - used by a node to reply to requests) is a base address of 580h plus the node ID number. In this scenario, RSDOs may only be used by one node (usually the master, sometimes a configuration tool) to avoid conflicts or collisions arising from multiple nodes trying to access a specific object dictionary at the same time.

The master or configuration tool can now scan for connected devices by sending 127 RSDO requests for the identity object - one to each potential node. All nodes present will respond with their TSDO containing the identification data from their object dictionary.
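
A hedged sketch of this scan follows, assuming the default identifiers described above and the identity object at index 1018h as defined in CiA 301; the CAN driver call can_send() is a hypothetical placeholder, not a real API.

```c
#include <stdint.h>

#define RSDO_BASE 0x600u  /* requests to a node's object dictionary  */
#define TSDO_BASE 0x580u  /* replies from a node's object dictionary */

/* Hypothetical CAN driver call: send one 8-byte frame with the given ID. */
extern void can_send(uint16_t can_id, const uint8_t data[8]);

/* Ask one node for its identity object (index 1018h, sub-index 1 = vendor ID). */
static void request_identity(uint8_t node_id)
{
    uint8_t frame[8] = {
        0x40,        /* SDO "initiate upload" (read) request, simplified */
        0x18, 0x10,  /* index 1018h, transmitted little-endian           */
        0x01,        /* sub-index 1                                      */
        0, 0, 0, 0
    };
    can_send(RSDO_BASE + node_id, frame);
}

/* Scan the whole network: one RSDO request per possible node ID.
   Nodes that are present answer on TSDO_BASE plus their node ID. */
static void scan_network(void)
{
    for (uint8_t node_id = 1; node_id <= 127; node_id++)
        request_identity(node_id);
}
```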

In case an object dictionary entry does not fit into one message (CAN has a limit of 8 bytes per message), the data transfer gets automatically fragmented. In this case, the first data byte is used as a control byte for handling the fragmentation. The remaining 7 bytes can be used for data transmitted with each fragment.
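
The following sketch illustrates only the fragmentation idea described above: one control byte plus up to 7 data bytes per frame. It deliberately simplifies the control byte and omits the exact command specifiers and handshaking of the real CANopen segmented SDO transfer.

```c
#include <stdint.h>
#include <string.h>

extern void can_send(uint16_t can_id, const uint8_t data[8]);

/* Send 'len' bytes of object dictionary data in 7-byte fragments.
   Byte 0 of every frame carries control information; here it is reduced
   to a toggle bit and a "last fragment" flag for illustration only. */
static void sdo_send_fragmented(uint16_t can_id, const uint8_t *buf, uint32_t len)
{
    uint8_t toggle = 0;

    while (len > 0) {
        uint8_t frame[8] = {0};
        uint8_t chunk = (len > 7) ? 7 : (uint8_t)len;

        frame[0] = (uint8_t)((toggle << 4) | ((chunk == len) ? 0x01 : 0x00));
        memcpy(&frame[1], buf, chunk);
        can_send(can_id, frame);

        buf    += chunk;
        len    -= chunk;
        toggle ^= 1u;
    }
}
```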

Device Profiles

Although the object dictionary concept allows for structuring the data that needs to be communicated, there is still something missing: which entry in the dictionary is used for what? The dictionary is far too big for the master to take "wild guesses" and simply try to access certain areas of the dictionary to see if they are supported.

The solution is simple: First of all, there are a few mandatory entries that all CANopen nodes must support. These include the identity object with which a node can identify itself and an error object to report a potential error state. Additional entries are specified by device profiles. Device profiles describe all the communication parameters and object dictionary entries that are supported by a certain type of CANopen module. Such profiles are available for generic I/O modules, encoders and other devices.

A master or configuration tool can read-access the identity object of any slave node using an SDO. As a reply, it receives an SDO with the information about which device profile the module conforms to. Assuming the master knows which object entries are defined for a particular device profile, it now knows which object dictionary entries are supported and can access them directly.

CANopen is open! If an application requires the implementation of non-standardized, manufacturer-specific object dictionary entries, that is not a problem. Entries that enable or disable functionality not covered by one of the existing device profiles can be added to any device, as long as they conform to the structural layout of the object dictionary.

Electronic Data Sheets

Electronic Data Sheets (EDS) offer a standardized way of specifying supported object dictionary entries. The manufacturer of a CANopen module delivers such a file with the module; its layout is similar to the ".ini" files used on Microsoft Windows operating systems.

A CANopen master or configuration tool running on a PC with a CAN card can directly load the EDS into its set of recognized devices. Once a device is found on the network, the master or configuration tool will try to find the matching EDS. Once found, all supported object dictionary entries are known by the master / configuration tool.

Figure 2 shows the relation between Device Profiles and Electronic Data Sheets. The Device Profile specifies the minimum entries that need to be supported by a device conforming to the profile. In addition, the EDS might specify objects that are specific to a certain manufacturer or sub-type of modules.
[Figure 2: CANopen master node configuration]

Increased Performance With Process Data Objects

So far, we have structured the configuration and process data in a way that a master can easily access it. From the process point of view, the following operating mode would be possible: polling all inputs, working on the data and then writing to all outputs. For most applications, this would not be an efficient communication model. As CAN supports the multi-master concept (any node can send a message at any time, and collisions are resolved by ID priority), we can expect a more direct, higher-priority access to the process data.

PDO Mapping

A Process Data Object (PDO) is a "shortcut" to the process data in the object dictionary. Via PDO mapping (all done through object dictionary entries), any dictionary entry can be mapped to data in a PDO, to a maximum of 8 bytes per PDO.

Let's have a look at the example in the PDO mapping figure: A CANopen input node supports 2 digital inputs of 8 bits each and 2 analog inputs of 12 bits each. In conformance with the Device Profile for Generic I/O modules, an object dictionary entry at 6000h stores the 2 digital inputs of 8 bits each and an entry at 6401h stores the 2 analog inputs as 2 words.

The object dictionary entry at 1A00h specifies the PDO mapping - which bits of which object dictionary entries are used in the Transmit PDO 1 (TPDO1), filling the TPDO bit-by-bit. Note that this mapping can really be done on a bit-level. Each entry starts using the first available, free bit in the PDO and occupies as many bits as it requires.

The first sub-index entry at 1A00h maps object 6000h, sub-index 1, 8 bits to the first bits of the TPDO1. The next sub-index entry at 1A00h maps object 6000h, sub-index 2, 8 bits to the next free bits of the TPDO1, and so on. In this example the remaining bits of TPDO1 (data bytes 6-8) remain unmapped and unused.
[Figure: PDO mapping]
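
In CiA 301, each mapping sub-entry at 1A00h is a 32-bit value combining the mapped object's index, sub-index and length in bits. A sketch of how the example above could be encoded follows; the macro name is an illustrative assumption, and the analog inputs are mapped at 12 bits each to match the bit-level mapping described above.

```c
#include <stdint.h>

/* One mapping entry: object index (16 bits), sub-index (8 bits), length in bits (8 bits). */
#define PDO_MAP(index, subindex, bits) \
    (((uint32_t)(index) << 16) | ((uint32_t)(subindex) << 8) | (uint32_t)(bits))

/* Mapping table for TPDO1 (object 1A00h) in the I/O node example:
   two 8-bit digital inputs from 6000h and two 12-bit analog inputs from 6401h. */
static const uint32_t tpdo1_mapping[] = {
    PDO_MAP(0x6000, 1, 8),   /* digital input byte 1                */
    PDO_MAP(0x6000, 2, 8),   /* digital input byte 2                */
    PDO_MAP(0x6401, 1, 12),  /* analog input 1, mapped at bit level */
    PDO_MAP(0x6401, 2, 12),  /* analog input 2, mapped at bit level */
};

/* Sub-index 0 of 1A00h holds the number of mapped entries. */
static const uint8_t tpdo1_mapping_count =
    (uint8_t)(sizeof tpdo1_mapping / sizeof tpdo1_mapping[0]);
```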

Which PDOs are predefined and what default mapping is used is also specified in the Device Profile. If the mapping does not change during operation we call that static mapping. Dynamic mapping is the process of re-mapping a PDO during run-time. Obviously, dynamic mapping is more complex and adds more overhead to the PDO processing time.

PDO Triggering

Now that we have a shortcut to several dictionary entries in one message, what are our options to trigger a PDO? CANopen supports a total of 4 trigger modes:
  1. Event driven: If the input device recognizes a change-of-state (COS) on any of its inputs, it updates the data in the object dictionary and the PDO and transmits the PDO. This mode allows for some of the fastest response times (a minimal sketch of this mode follows the list).
  2. Time driven: A PDO can be configured to transmit itself on a fixed time basis, for instance every 50 milliseconds. This mode helps to make the total busload more predictable.
  3. Polling: Using a regular CAN feature, the remote request frame, a PDO only gets transmitted if the data was specially requested by another node.
  4. Synchronized: A special mode allowing for a synchronized polling as required by many motion control applications.
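
As a rough illustration of the event-driven (COS) mode from item 1, the loop below transmits TPDO1 only when a digital input byte changes. The helper functions and the node ID behind the TPDO1 identifier are hypothetical; 180h plus the node ID is the common default for TPDO1.

```c
#include <stdint.h>
#include <stdbool.h>

extern void    can_send(uint16_t can_id, const uint8_t data[8]);
extern uint8_t read_digital_inputs(void);   /* hypothetical hardware access */

#define TPDO1_COB_ID 0x181u  /* common default: 180h plus node ID 1 (assumed) */

/* Call periodically (or from an input interrupt): transmit TPDO1 on change of state. */
void cos_poll(void)
{
    static uint8_t last_state;
    static bool    initialized;

    uint8_t now = read_digital_inputs();

    if (!initialized || now != last_state) {
        uint8_t frame[8] = {0};
        frame[0] = now;               /* first mapped byte of TPDO1 */
        can_send(TPDO1_COB_ID, frame);
        last_state  = now;
        initialized = true;
    }
}
```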

PDO Linking

When it comes to the communication partners involved, we have a similar arrangement as with the SDOs. The default is that the master is the only node that receives Transmit Process Data Objects (TPDO). And only the master may send Receive Process Data Objects (RPDO) to the slaves. In other words, we ensure that a pre-defined connection set is usable by default, as we assign unique CAN message identifiers to each supported PDO - one unique ID for each TPDO and one for each RPDO.
[Figure: Predefined PDO connections]
During the initialization and configuration cycle, the PDO linking can be changed. A master could inform one or multiple output modules that they should directly listen to a specific TPDO of an input module. Again, a TPDO correlates to a unique CAN message identifier, so we basically just inform a node which message frames it should listen to and which ones it can ignore.

Once these new linking settings are made and the network goes into the operational mode, the master no longer needs to get involved in the process data communication and can focus on other things like network management.
[Figure: Dynamic PDO linking]
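
A hedged sketch of that re-linking step: the master writes the input module's TPDO1 identifier into the COB-ID entry of the output module's RPDO1 (object 1400h, sub-index 1 in CiA 301), after which the output module listens directly to the input module. The SDO framing is simplified, and can_send() is again a hypothetical driver call.

```c
#include <stdint.h>

extern void can_send(uint16_t can_id, const uint8_t data[8]);

#define RSDO_BASE  0x600u
#define TPDO1_BASE 0x180u

/* Point the RPDO1 of 'output_node' at the TPDO1 of 'input_node'.
   Object 1400h, sub-index 1 holds the COB-ID an RPDO listens to. */
static void link_pdo(uint8_t output_node, uint8_t input_node)
{
    uint32_t cob_id = TPDO1_BASE + input_node;

    uint8_t frame[8] = {
        0x23,               /* SDO "initiate download" (write), 4 data bytes, simplified */
        0x00, 0x14,         /* index 1400h, transmitted little-endian                    */
        0x01,               /* sub-index 1                                               */
        (uint8_t)(cob_id),
        (uint8_t)(cob_id >> 8),
        (uint8_t)(cob_id >> 16),
        (uint8_t)(cob_id >> 24),
    };
    can_send(RSDO_BASE + output_node, frame);
}
```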

Source: http://www.esacademy.com/en/library/technical-articles-and-documents/can-and-canopen/embedded-networking-with-canopen.html

Monday 18 May 2015

Electrical Automation: PLC SCADA Training Institute For B.Tech/Diploma Students

Programmable Logic Controller (PLC)

A programmable logic controller (PLC) or programmable controller is a digital computer used for automation of industrial processes, such as control of machinery on factory assembly lines. Unlike general-purpose computers, the PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a bounded time; otherwise unintended operation will result.
Hence, a programmable logic controller is a specialized computer used to control machines and processes. It therefore shares common terms with typical PCs, like central processing unit, memory, software and communications. Unlike a personal computer, though, the PLC is designed to survive in a rugged industrial environment and to be very flexible in how it interfaces with inputs and outputs in the real world.
The components that make a PLC work can be divided into three core areas.
  • The power supply and rack
  • The central processing unit (CPU)
  • The input/output (I/O) section
PLCs come in many shapes and sizes. They can be small enough to fit in your shirt pocket, while more involved control systems require large PLC racks. Smaller PLCs (a.k.a. “bricks”) are typically designed with fixed I/O points. For our consideration, we’ll look at the more modular rack-based systems. They are called “modular” because the rack can accept many different types of I/O modules that simply slide into the rack and plug in.
Figure 1 Power supply and Rack
Figure 2 Backplane
Rack
The rack is the component that holds everything together.  Depending on the needs of the control system it can be ordered in different sizes to hold more modules.  Like a human spine the rack has a backplane at the rear which allows the cards to communicate with the CPU.  The power supply plugs into the rack as well and supplies a regulated DC power to other modules that plug into the rack.  The most popular power supplies work with 120 VAC or 24 VDC sources.
The CPU
The brain of the whole PLC is the CPU module.  This module typically lives in the slot beside the power supply.  Manufacturers offer different types of CPUs based on the complexity needed for the system.
The CPU consists of a microprocessor, memory chip and other integrated circuits to control logic, monitoring and communications. The CPU has different operating modes. In programming mode it accepts the downloaded logic from a PC. The CPU is then placed in run mode so that it can execute the program and operate the process.
Since a PLC is a dedicated controller, it will only process this one program over and over again. One cycle through the program is called the scan time and involves reading the inputs from the other modules, executing the logic based on these inputs and then updating the outputs accordingly. The scan time happens very quickly (in the range of 1/1000th of a second). The memory in the CPU stores the program while also holding the status of the I/O and providing a means to store values.
Figure 3 Components of a PLC
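
The scan cycle described above can be pictured as a simple loop; the function names below are hypothetical stand-ins for the hardware access a real PLC runtime would provide.

```c
#include <stdint.h>

/* Hypothetical hardware/runtime hooks. */
extern void read_physical_inputs(uint8_t *image, int n);
extern void write_physical_outputs(const uint8_t *image, int n);

#define IO_POINTS 16

static uint8_t input_image[IO_POINTS];
static uint8_t output_image[IO_POINTS];

/* User logic: a trivial "rung" where output 0 follows input 0 AND input 1. */
static void execute_logic(void)
{
    output_image[0] = (uint8_t)(input_image[0] && input_image[1]);
}

/* One scan: read inputs, execute logic, update outputs - repeated forever. */
int main(void)
{
    for (;;) {
        read_physical_inputs(input_image, IO_POINTS);
        execute_logic();
        write_physical_outputs(output_image, IO_POINTS);
    }
}
```
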
PLC SCADA Training Institute in Noida with 100% Placement Assistance.

To know our training institute's address, give us a missed call any time: +91-7065762590