
Big Data in LabVIEW (Part 1)

LabVIEW is a language most often used to acquire data and display it on a user interface. This process is commonly described in three stages: (1) acquisition, (2) analysis, and (3) presentation. Although it is usually discussed in the context of reading a couple of inputs and logging them to a graph, the same process can scale up by many orders of magnitude. Big data, put simply, is the acquisition, analysis, and presentation of a significantly larger data set, whether measured in time span, number of individual data points, or number of data sources.
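
LabVIEW expresses this flow graphically, but the same acquire, analyze, present loop can be sketched in text form. The Python snippet below is a minimal, hypothetical analogue; the read_sensor function is a stand-in for a real data-acquisition driver call, not part of any LabVIEW API.

```python
import random
import time

def read_sensor() -> float:
    """Hypothetical stand-in for a real DAQ driver read."""
    return 20.0 + random.gauss(0.0, 0.5)

def main() -> None:
    samples = []
    for _ in range(10):
        # (1) Acquisition: read one value from the (simulated) input
        value = read_sensor()
        samples.append(value)

        # (2) Analysis: compute a simple running statistic
        mean = sum(samples) / len(samples)

        # (3) Presentation: a front-panel chart in LabVIEW, a console line here
        print(f"latest={value:6.2f}  running mean={mean:6.2f}")
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```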

Nearly any problem can be solved with the right amount of data and an idea of how to use that data to generate a solution. Big data is a tool for solving a variety of problems, with the end goal of reducing cost, using the following process (a minimal code sketch follows the list):

[Infographic: the big data process outlined below]

  1. Define the process and its parameters
    1. Select quantitative and qualitative parameters of the process to measure
    2. Assign tags that further describe those parameters (used later for filtering)
    3. Use the parameters to define metrics that will be evaluated during analysis
  2. Acquire the process parameters
    1. Acquire parameters through data acquisition hardware
    2. Acquire parameters that serve as inputs to process automation
    3. Record inputs as time-stamped, real-world values
  3. Filter and analyze the big data set
    1. Apply tag-based filters to shrink the data set for focused trending
    2. Apply the metrics to quantify efficiency and cost
    3. Apply human criteria to provide accountability
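
As a concrete, hypothetical illustration of steps 1 through 3, the Python sketch below defines a couple of tagged parameters, acquires time-stamped samples, filters them by tag, and evaluates a simple cost metric. Names such as Parameter, pump_power, and the $0.12/kWh rate are assumptions made for the example, not part of any particular LabVIEW or DAQ API.

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class Parameter:
    """Step 1: a measured parameter plus the tags used later for filtering."""
    name: str
    unit: str
    tags: set = field(default_factory=set)
    samples: list = field(default_factory=list)  # (timestamp, value) pairs

    def acquire(self, value: float) -> None:
        # Step 2: store each reading as a time-stamped, real-world value
        self.samples.append((time.time(), value))

# Step 1: define the process parameters and their tags
power = Parameter("pump_power", "kW", tags={"pump", "electrical"})
flow = Parameter("flow_rate", "L/min", tags={"pump", "mechanical"})

# Step 2: acquire a handful of (simulated) readings
for _ in range(20):
    power.acquire(5.0 + random.gauss(0.0, 0.2))
    flow.acquire(120.0 + random.gauss(0.0, 3.0))

# Step 3: filter by tag to shrink the data set, then apply a metric
electrical = [p for p in (power, flow) if "electrical" in p.tags]
for param in electrical:
    values = [v for _, v in param.samples]
    avg = sum(values) / len(values)
    energy_cost = avg * 0.12  # assumed $/kWh rate, cost per hour of operation
    print(f"{param.name}: mean {avg:.2f} {param.unit}, est. cost ${energy_cost:.2f}/h")
```
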
Big data can be used to find solutions across many industries and situations; some examples are provided below:
  • Predictive Maintenance (PdM): An application can acquire data from a Unit Under Test (UUT) at regular intervals and continuously analyze it to detect an acute failure or a gradual degradation of performance over time (see the sketch after this list).
  • Performance Analysis: Metrics applied to data acquired in real time yield values that represent performance, which can then be translated into cost. The same data can also establish a baseline for quantifying the effect of process changes.
  • Simulation Models: A simple model can be created by playing back acquired data, or a more sophisticated, statistically similar model can be built from that data. Such a model can be used to develop a control algorithm without hardware, avoiding additional edge-case testing that could be destructive or expensive.
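
For the predictive-maintenance case in particular, the core analysis is often just a trend check over the recent history of a measurement. The sketch below fits a linear trend to the latest readings and flags the UUT when the slope crosses a degradation threshold; the vibration_history values and the SLOPE_THRESHOLD are illustrative assumptions, not measured data.

```python
def degradation_slope(history: list[float]) -> float:
    """Least-squares slope of the readings over their sample index."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0

# Illustrative vibration readings (mm/s RMS) acquired at regular intervals
vibration_history = [2.1, 2.0, 2.2, 2.3, 2.5, 2.6, 2.9, 3.1, 3.4, 3.8]

SLOPE_THRESHOLD = 0.1  # assumed increase per interval that warrants service

slope = degradation_slope(vibration_history)
if slope > SLOPE_THRESHOLD:
    print(f"Degradation trend detected (slope {slope:.3f}); schedule maintenance.")
else:
    print(f"Vibration stable (slope {slope:.3f}).")
```
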
Big data is a tool that can be used to solve problems, both technical and business-related, with the overall goal of creating cost savings. Check back for the continuation of this blog series; in the next installment, we discuss the requirements of a big data system.
