<img height="1" width="1" src="https://www.facebook.com/tr?id=1101141206686180&amp;ev=PageView &amp;noscript=1">

Blog

Big Data in LabVIEW 1

LabVIEW is a language most often used to acquire data and display it on a user interface. This process is often described using three terms: (1) acquisition, (2) analysis, and (3) presentation. Although the process is usually discussed in the context of reading a couple of inputs and logging them to a graph, it can also scale up by many orders of magnitude. Big data, put simply, is the acquisition, analysis, and presentation of a data set that is significantly larger in duration, number of individual data points, or number of data sources.
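Because LabVIEW is a graphical language, a text-based sketch is shown here in Python instead; it is only meant to illustrate the three steps in miniature. The read_sensor() function, the sample count, and the loop delay are hypothetical stand-ins for an actual DAQ read.

    import random
    import statistics
    import time

    def read_sensor():
        """Hypothetical stand-in for a DAQ read; returns one voltage sample."""
        return 5.0 + random.gauss(0, 0.1)

    # (1) Acquisition: collect a fixed number of samples
    samples = []
    for _ in range(100):
        samples.append(read_sensor())
        time.sleep(0.01)  # roughly 100 samples per second

    # (2) Analysis: reduce the raw samples to summary statistics
    mean_v = statistics.mean(samples)
    spread = statistics.stdev(samples)

    # (3) Presentation: in LabVIEW this would be a front-panel graph or indicator
    print(f"mean = {mean_v:.3f} V, stdev = {spread:.3f} V")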

Any problem can be solved with an appropriate amount of data and an idea of how to use that data to generate a solution. Big data is a tool that can be used to solve a variety of problems, with an end goal of reducing cost, using the following process (a code sketch of these steps follows the list):

(Infographic: the big data process)

  1. Define process and parameters
    1. Select quantitative and qualitative parameters of the process to measure
    2. Assign tags to those parameters (used later for filtering)
    3. Use the parameters to define metrics that can be evaluated after analysis
  2. Acquire process parameters
    1. Acquire parameters using data acquisition hardware
    2. Acquire parameters as inputs for process automation
    3. Acquire inputs as time-stamped, real-world values
  3. Filter/Analyze big data
    1. Use filters to shrink the data set for focused trending
    2. Use metrics to quantify efficiency and cost
    3. Use human criteria to provide accountability
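As a concrete, simplified illustration of the define/acquire/filter steps above, the Python sketch below tags each acquired value, filters the combined data set by parameter name, and computes a single cost-oriented metric. The parameter names, tags, simulated acquire() function, and the flow-per-power metric are assumptions for illustration, not part of the process described above.

    import random
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Sample:
        """One acquired data point, carrying tags used later for filtering."""
        name: str          # parameter name, e.g. "pump_flow"
        value: float       # real-world (engineering-unit) value
        timestamp: float   # acquisition time, seconds since the epoch
        tags: dict = field(default_factory=dict)

    def acquire(name, tags, n=50):
        """Hypothetical acquisition loop; random data stands in for a DAQ read."""
        return [Sample(name, random.gauss(10.0, 0.5), time.time(), tags)
                for _ in range(n)]

    # Define: two parameters, each tagged with the line and unit they belong to
    data = []
    data += acquire("pump_flow",  {"line": "A", "unit": "L/min"})
    data += acquire("pump_power", {"line": "A", "unit": "kW"})

    # Filter: shrink the data set to one parameter at a time for focused trending
    flow = [s.value for s in data if s.name == "pump_flow"]
    power = [s.value for s in data if s.name == "pump_power"]

    # Metric: average flow per unit power as a stand-in for cost efficiency
    efficiency = sum(flow) / sum(power)
    print(f"flow per unit power: {efficiency:.2f} (L/min)/kW")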
Big data can be used to find solutions in a variety of industries and situations; some examples are provided below:
  • Predictive Maintenance (PdM): An application can acquire data from a Unit Under Test (UUT) at regular intervals and continuously analyze it to detect an acute failure or a gradual degradation of performance over time (a code sketch follows this list).
  • Performance Analysis: Metrics can be combined with data acquired in real time to produce values representative of performance, which can then be used to identify cost. This data can also be used to establish a baseline for quantifying the effect of process changes.
  • Simulation Models: A simple model can be built by playing back acquired data, or a more sophisticated, statistically similar model can be derived from that data. Such a model makes it possible to develop a control algorithm without hardware and/or to avoid additional edge-case testing that could be destructive and/or expensive.
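To make the predictive maintenance example more concrete, here is a minimal Python sketch that compares the most recent daily average of a monitored value against a baseline and flags degradation. The simulated vibration signal, the 24-sample window, and the 25% limit are all assumptions chosen for illustration; a real system would use acquired UUT data and limits from the process owner.

    import random
    import statistics

    def read_vibration(hours_elapsed):
        """Hypothetical UUT measurement: vibration that slowly worsens with wear."""
        return 1.0 + 0.002 * hours_elapsed + random.gauss(0, 0.05)

    # Acquire one reading per simulated hour over a month of operation
    history = [read_vibration(h) for h in range(24 * 30)]

    # Establish a baseline from the first day, then watch the most recent day
    baseline = statistics.mean(history[:24])
    recent = statistics.mean(history[-24:])

    # Flag degradation when the recent average drifts past the chosen limit
    DEGRADATION_LIMIT = 1.25  # 25% above baseline (assumed threshold)
    if recent > baseline * DEGRADATION_LIMIT:
        print(f"degradation detected: {recent:.2f} vs baseline {baseline:.2f}")
    else:
        print(f"within limits: {recent:.2f} vs baseline {baseline:.2f}")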
Big data is a tool that can be used to solve problems, both technical and business-related, with an overall goal of creating cost savings. Check back for the continuation of this blog series; in the next installment, we discuss the requirements and needs of a big data system.
