
NI Week 2017 (2 of 3)

Erdos Miller had a great time at NI Week 2017 this past week. We sat in on a variety of sessions and have summarized our notes in a series of three blog posts.

This blog contains notes regarding:

  • Session AD0936, which introduced functional safety modules to the cRIO platform
  • Session AD0586, which revisits the idea of the Queue Driven State Machine
  • Session 0183, which covers a way to create a flexible system using the Actor Framework
  • Session 0536, which covers a team’s experience deciding when to refactor code

Introduction to Functional Safety on cRIO
Chris Johnson

From a new-product standpoint, this was one of the most exciting reveals for me. National Instruments’ new line of functional safety modules helps shrink the TCO of industrial systems by giving the cRIO the tools to provide functional safety: a function once specific to the PLC and safety relay market. By using functional safety modules, companies can reduce the cost of their systems and choose to invest in a single hardware platform and programming discipline.

National Instruments opened the functional safety market to NI products with a new set of SIL (Safety Integrity Level) rated modules certified to IEC 61508. The NI cRIO line of products has always been at odds with its indirect competitor: the PLC. Until recently, one of the areas where PLCs could be used and cRIOs could not was functional safety. Hardware like safety relays contains specific internal logic that has been certified to be safe under certain circumstances, giving the companies and facilities that use them a level of certainty about how the system will behave in off-nominal situations.

The new C-Series modules (NI 9350 and NI 9351) contain on-board FPGAs that let programmers use the Functional Safety Editor to create out-of-band, SIL-certified safety logic, giving cRIOs the ability to provide high-speed data acquisition, control, AND safety.

Rebirth of the LabVIEW State Machine aka Why Your QDMH Stinks
Norman Kirchner; Aaron Ryan

This session presented an incredibly cool idea that turns the Queued Message Handler (QMH) on its head, along with several ways developers implement a QMH incorrectly. One of the easiest ways to sum up the session is the following [paraphrased] quote: “A Queued Message Handler is not a State Machine.” Using this modified QMH saves cost when bringing on new developers and teaching them program flow, as well as time during debugging, because the layout of the program becomes significantly easier to follow.

The session goes on to present an architecture in which a state machine implemented with a queued message handler is split into multiple paths of logic, making the flow of the state machine easier to understand at a glance. Some quick notes:

  • You must identify states/modes, triggers, and actions/behaviors
  • Actions only happen during transitions
  • Triggers cause transitions

Organizing your code this way makes it easier to prevent actions from occurring when they shouldn’t, and to add new states while knowing exactly where actions can occur and under what circumstances.
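Since LabVIEW is graphical, here is a minimal Python sketch (our own illustration, not code from the session) of the states/triggers/actions separation described above. Actions are attached to transitions, never to states, so a trigger that has no transition from the current state simply cannot fire an action:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    STOPPED = auto()

class Trigger(Enum):
    START = auto()
    STOP = auto()

def log(msg):
    print(msg)

# Transition table: (state, trigger) -> (next state, action).
# Actions live only on transitions.
TRANSITIONS = {
    (State.IDLE, Trigger.START): (State.RUNNING, lambda: log("starting acquisition")),
    (State.RUNNING, Trigger.STOP): (State.STOPPED, lambda: log("stopping acquisition")),
}

class StateMachine:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, trigger):
        key = (self.state, trigger)
        if key not in TRANSITIONS:
            return  # trigger is ignored in this state; no action can fire
        next_state, action = TRANSITIONS[key]
        action()  # actions only happen during transitions
        self.state = next_state

sm = StateMachine()
sm.handle(Trigger.STOP)   # ignored: no STOP transition from IDLE
sm.handle(Trigger.START)  # fires the "starting acquisition" action, moves to RUNNING
```

Because every legal (state, trigger) pair is enumerated in one table, adding a new state means adding table rows, and it is immediately visible which actions can occur and when.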

The Top Level Baseline architecture can be found at: https://lavag.org/topic/16188-tlb-top-level-baseline-prime-application-template/

Process Control for the Actor Framework
Bob Cummer; Asa Croasmun; Joe Zolnowski

When I attended this session, I expected to see best practices for implementing the Actor Framework in process control; I was particularly interested in how the speakers handled state machines, sequencing, and timing. The Actor Framework provides a solid software architecture with consistent testing of code, scalability, and a consistent messaging service. In process control, the Actor Framework can greatly reduce the time required to incorporate new devices into a process and to modify existing recipe batch processing.

However, the session ended up being a high-level overview of the decision making that led the team to implement the Actor Framework, plus a very brief overview of the design of the RT target and the user interface.

The team needed an architecture that could implement ANSI/ISA-88 standards for process control. In addition, they needed an architecture that could handle minute intricacies between different types of hardware that provided the same functionality, and an ability to execute batch process recipes. The team chose to use the Actor Framework for its object-oriented nature, its defined messaging framework, and the support provided by National Instruments.

On the RT implementation, Asa developed a “Main Actor” that was dynamic enough to be configured through INI files. This actor is responsible for translating the user-configured recipes and launching the necessary nested actors to carry out recipe commands.

On the desktop implementation, Joe developed a lightweight user interface consisting of two panes, each with a sub-panel. Each sub-panel is controlled by a UI actor, and these UI actors can be created on an as-needed basis. Joe then modifies an INI file to tell the UI which actors should populate the sub-panels. This approach shortens the time required to implement new user interfaces.

Both Joe and Asa claimed to spend more time modifying configuration files than programming new actors, thanks to the dynamic nature of their Actor Framework implementation.
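To make the configuration-driven idea concrete, here is a hypothetical Python sketch (the real system is LabVIEW; the section names, actor classes, and settings below are our own invention) of a main routine that reads an INI file and launches the actors it names, so that adding a device means editing configuration rather than code:

```python
import configparser

# Hypothetical INI content naming which actor handles each recipe step.
CONFIG = """
[recipe.heat_step]
actor = HeaterActor
setpoint_c = 80

[recipe.mix_step]
actor = MixerActor
rpm = 1200
"""

class HeaterActor:
    def __init__(self, settings):
        self.setpoint = float(settings["setpoint_c"])

class MixerActor:
    def __init__(self, settings):
        self.rpm = int(settings["rpm"])

# Registry mapping config names to actor classes.
ACTOR_REGISTRY = {"HeaterActor": HeaterActor, "MixerActor": MixerActor}

def launch_from_config(text):
    """Read the INI text and instantiate each configured actor."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    actors = []
    for section in cfg.sections():
        settings = dict(cfg[section])
        cls = ACTOR_REGISTRY[settings.pop("actor")]
        actors.append(cls(settings))
    return actors

actors = launch_from_config(CONFIG)  # two actors, driven entirely by the INI text
```

Swapping a device or adding a recipe step is then a one-line INI change, which matches the speakers’ claim that configuration edits outnumbered code changes.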

When to Trash Your Code: Lessons from a LabVIEW OOP Framework Refactor
Marcus Gavin; Chris Cilino 

This was one of the more entertaining sessions I attended. Chris and Marcus discussed the process of refactoring their CLIVE system at Cirrus Logic. They walked step-by-step through the decisions they made and the information they had when deciding what code to keep and what code to refactor. The session provides a first-hand look at what can happen, in terms of development effort and ultimately cost, when a client (or management) suggests a single change.

The first step in the process was to define the rate of iteration they wanted for releasing new software updates. The choices were quick-turn iteration, releasing frequent small updates, or long-design-cycle iteration, implementing larger packages of functionality and feature updates on a slower timeline.

Next, the team went back to the original feature scope. The goal was to identify problems in the original scope that had arisen since the initial software release, as well as the reasons for the sacrifices made in the original scope.

After that, the team went to work creating a new project scope meant to tackle the issues discovered in the original one. For example, the original scope supported testing one DUT at a time, but as the team discovered, the testing process now demanded the ability to test multiple DUTs at once.

Once the new scope was defined, the team began analyzing the existing source code step by step. The first category was the user interface. The team asked themselves “What parts of the user experience can meet the new requirements?”. They quickly discovered that the UX was a tab-based experience that did not easily scale to the configuration of multiple DUTs. The decision was made to scrap the entire UI in favor of a re-write that made use of trees and sub-panels.

The next category to analyze was the software architecture. The existing architecture provided an event-based messaging system with all event references stored in a functional global. Each module would register for these events and handle them in a queued state machine. One issue with this architecture was that messaging could not be tracked: it was impossible to say who was sending messages to whom at any given time. The team noted that the Actor Framework could provide more control over the messaging. However, Chris observed that the average LabVIEW experience on the team was at the CLAD level, and the team was not ready to implement the Actor Framework. They chose not to refactor the architecture.
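To illustrate the traceability gap, here is a hypothetical Python sketch (our own illustration, not the team’s LabVIEW code): if every message carries its sender and recipient, a central send routine can log who is talking to whom, which is exactly what an anonymous event-reference scheme cannot answer:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Message:
    sender: str
    recipient: str
    payload: str

# Central log of every send, recording the sender/recipient pair.
message_log = []

def send(queue, msg):
    message_log.append(f"{msg.sender} -> {msg.recipient}: {msg.payload}")
    queue.put(msg)

ui_queue = Queue()
send(ui_queue, Message("DAQModule", "UI", "new data ready"))
send(ui_queue, Message("Logger", "UI", "file rotated"))

# message_log now answers "who sent what to whom" at any point in time.
```

With events stored anonymously in a functional global, there is no equivalent choke point to instrument, which is why the team could not trace their messaging.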

Finally, the team looked at the software modeling of the hardware. The existing software implemented some object-oriented programming techniques; however, it ended up with one master class containing all the system information. The team decided to keep the master class, but its private data was refactored and migrated into additional classes representing hardware on the DUT.

Useful links provided in the lecture:

LabVIEW Champions: http://forums.ni.com/t5/LabVIEW-Champions/ct-p/7029
LabVIEW Object Oriented Programming Resources: http://forums.ni.com/t5/LabVIEW-Development-Best/LabVIEW-Object-Oriented-Programming-Resource-Directory/ta-p/3523820
Variant Attributes: LabVIEW’s Best Kept Secret: http://www.ni.com/webcast/3654/en/