Press Release (ePRNews.com) - HOD HASHARON, Israel - Oct 18, 2016 - Propelled by increasing digitalization and the emergence of the Internet of Things (IoT), building automation systems are undergoing a significant transformation. Central to this transformation are smart sensors that provide the rich data needed to make buildings smarter. Actionable data from sensors such as motion detectors, photocells, temperature gauges, and CO2 and smoke detectors is used primarily for energy savings and safety. Next-generation buildings, however, are intended to be significantly more intelligent, with the capability to analyze space utilization, monitor occupants’ comfort and generate business intelligence. To support such robust features, building automation management systems require considerably richer information detailing what is happening across the building space. Because current sensing solutions are limited in their ability to address this need, a new generation of smart sensors is required to increase the volume, accuracy, reliability, flexibility and granularity of the data they provide.
An immediate challenge associated with the new generation of IoT sensors is that building networks must effectively carry much larger and richer sensor data. The evolution of the IoT introduces a new paradigm for building automation: a decentralized architecture in which a great deal of analytics processing can be done at the edge (the sensor unit) instead of in the cloud or on a central server. This computing approach, often called “edge computing” or “fog computing,” provides real-time intelligence and greater control agility while at the same time off-loading heavy communications traffic. This is especially relevant for image-based sensors, which are capable of generating an exceptional amount of valuable information.
New developments in computing technology have yielded cheap and energy-efficient processors that are suitable for such data processing—that is, analyzing the sensor data inside the sensor unit itself. Successful implementation of this approach would enable the final summary of the analysis (rather than the raw data) to be sent over the network, resulting in a lower volume of network traffic and a shorter response time. Yet, the challenge of rich data analysis using low-power and affordable processors is not insignificant.
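The bandwidth argument above can be made concrete with a rough back-of-the-envelope calculation. The sketch below is purely illustrative: the frame size, frame rate and summary format are assumptions, not figures from any particular sensor.

```python
import json

# Hypothetical raw sensor output: a day's worth of low-resolution
# occupancy-camera frames (64x64 pixels, 1 byte per pixel, 1 frame/sec).
FRAME_BYTES = 64 * 64
FRAMES_PER_DAY = 24 * 60 * 60
raw_bytes = FRAME_BYTES * FRAMES_PER_DAY

# Edge-computing alternative: analyze frames inside the sensor unit and
# transmit only a compact summary (here, one record per 15 minutes).
summary = [{"interval": i, "occupants": 0, "motion_events": 0}
           for i in range(24 * 4)]
summary_bytes = len(json.dumps(summary).encode("utf-8"))

print(f"raw stream:   {raw_bytes:,} bytes/day")
print(f"edge summary: {summary_bytes:,} bytes/day")
```

Even with these modest assumptions, sending the final summary rather than the raw frames cuts network traffic by several orders of magnitude, which is the core motivation for analyzing the data inside the sensor itself.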
Approaches to Data Analysis
Fundamental to addressing the inherent challenges of rich data analysis is understanding the contrasts between conventional rule-based systems and data-driven systems. Rule-based systems, sometimes called “expert systems,” typically exhibit inferior performance and adapt slowly to new types of data (for example, from an upgraded sensor or a new sensor capturing previously untapped data) or to changing domains (for example, a new style of furniture or new lighting conditions). Moreover, although rule-based systems are supposedly easier to analyze, as the system evolves, patches of rules are layered upon each other to account for myriad new rule exceptions, often yielding a hard-to-decipher “spaghetti code” of rules.
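The rule-layering problem described above can be illustrated with a small sketch. The sensor inputs, thresholds and exceptions below are invented for illustration; the point is how each new exception becomes another hand-written branch.

```python
# A hypothetical rule-based occupancy classifier for an office sensor.
# Every newly discovered exception is handled by patching in another
# branch, and the rule set drifts toward hard-to-maintain "spaghetti".
def occupied(motion, lux, co2_ppm, hour):
    if motion:
        return True
    if co2_ppm > 800:                      # exhaled CO2 suggests people present
        return True
    if lux > 300 and 8 <= hour <= 18:      # lights on during work hours
        return True
    if lux > 300 and hour >= 22:           # patch: night cleaning crew
        return True                        # triggered false negatives
    # next patch: new LED fixtures changed the lux threshold in one wing...
    return False

print(occupied(False, 0, 900, 3))
```

Each rule is easy to read in isolation, but the interactions between patches quickly become hard to reason about, and every change in sensors or environment forces a human programmer back into the code.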
Data-driven systems follow a different paradigm, in which computer programs describe the data to be matched and the processing it requires. One of the better approaches to rich data analysis is the use of data-driven Machine Learning systems, particularly when cameras are employed at the sensing layer. With these systems, the burden of defining effective rules is transferred from human experts to the algorithm. Unlike rule-based systems, in which rule creation and modification are assigned to human programmers, in data-driven systems humans are tasked only with defining the features of the raw data that they believe hold relevant information. Once the features have been defined, the formulas that use these features are learned automatically by the algorithm. For this to work, the algorithm needs access to a multitude of data samples labeled with the desired outcomes, so that it can properly adapt. The sensor, deployed with the learned formulas, repeatedly runs a two-stage process: first, the human-defined features are computed from the sensor data; then, the learned rules are applied to perform the task at hand.
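The two-stage process can be sketched in a few dozen lines. This is a minimal illustration, not a production pipeline: the raw-reading format, the hand-picked features and the synthetic training data are all assumptions, and the "learned formula" is a simple logistic rule fitted by stochastic gradient descent.

```python
import math
import random

# Stage 1 (human-designed): extract features believed to hold the
# relevant information from a raw sensor reading. The reading format
# and features here are illustrative assumptions, not a real sensor API.
def features(reading):
    pixels = reading["pixels"]
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return [mean / 255.0, spread / 255.0, 1.0]  # last entry is a bias term

# Stage 2 (machine-learned): fit a linear decision rule over those
# features from labeled samples, via gradient descent on a logistic loss.
def train(samples, labels, epochs=500, lr=0.5):
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(len(w)):
                w[i] += lr * (y - p) * x[i]
    return w

# At run time the deployed sensor repeats the two stages:
# compute the features, then apply the learned rule.
def predict(w, reading):
    z = sum(wi * xi for wi, xi in zip(w, features(reading)))
    return z > 0.0

# Synthetic labeled data: "occupied" frames are brighter and noisier.
random.seed(0)
empty = [{"pixels": [random.randint(10, 40) for _ in range(64)]} for _ in range(50)]
busy = [{"pixels": [random.randint(60, 200) for _ in range(64)]} for _ in range(50)]
X = [features(r) for r in empty + busy]
y = [0] * 50 + [1] * 50
w = train(X, y)
print(predict(w, busy[0]), predict(w, empty[0]))
```

Note that the human still chose the features (brightness and contrast); only the formula combining them was learned from the labeled samples.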
Within Machine Learning, Deep Learning is an advanced approach that further automates the computational process, whereby even the burden of defining features is removed from the human programmer. With Deep Learning, the algorithm defines an end-to-end computation—from the raw sensor data all the way to the final output. Using a sophisticated neural network that comprises a complex computational circuit with millions of parameters, the algorithm “figures out” for itself what the correct features are and makes adjustments until it zeroes in on the right function.
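The end-to-end idea can be shown with a deliberately tiny neural network, sketched in pure Python. The XOR task is chosen because no single raw input (and no single linear rule over the raw inputs) solves it, so the hidden layer must discover useful intermediate features on its own; the network size, learning rate and epoch count are illustrative choices.

```python
import math
import random

# Raw inputs map straight to the output through learned layers;
# no hand-designed feature-extraction stage sits in between.
random.seed(1)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)        # backpropagate the error
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= 0.5 * d_out * h[j]
            for i in range(2):
                w1[j][i] -= 0.5 * d_h * x[i]
            b1[j] -= 0.5 * d_h
        b2 -= 0.5 * d_out
loss_after = mse()

print(f"MSE: {loss_before:.3f} -> {loss_after:.3f}")
print([round(forward(x)[1]) for x, _ in data])
```

Real Deep Learning systems follow the same principle at vastly larger scale, with millions of parameters adjusted automatically until the network zeroes in on the right function.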
The Deep Learning approach has proved to be highly effective in various application areas such as speech and audio processing, information retrieval, natural language processing and object recognition. The successful implementation of Deep Learning in computer vision further supports its promising use in smart image-based sensors for building automation. In summary, the main advantages of Deep Learning are its end-to-end computation from raw sensor data to final output, its removal of the burden of manual feature definition, and its proven effectiveness across diverse application areas.
For more information about building automation, visit: http://www.pointgrab.com.