April 29, 2021 - MT (Mexican Pacific Time, GMT-07:00)
Hosted at the very beginning of both days, our wellbeing sessions will set you up for a day filled with learning and networking.
- Discover how definitions for edge computing will continue to evolve and the impact this will have on engineers building embedded systems for the edge
- Understand the vast array of connectivity options, which option best fits your device build, and how to tailor the build so the technology runs smoothly
- Dive into AI at the edge: A look at the different types of hardware AI accelerators
Code size of embedded applications has been a concern for a very long time. Even as storage becomes cheaper and smaller, developers find creative ways to increase code size by adding features or unnecessary software engineering. Compilers have come a long way in optimizing applications for code size. While most compiler optimization work has focused on application performance, recent years have seen an increase in code size optimizations.
This session will cover classical as well as recent compiler optimizations for code size, a few of which Aditya has implemented in the LLVM compiler. Some optimizations (hot/cold splitting, function entry instrumentation) require collecting data from the field while the application is running. The presentation will provide an overview of how those compiler techniques help reduce code size. We will also explore tips and techniques that help reduce binary size, such as compiler flags for code size and tuning of compiler options like the inline threshold. Knowing the code generated by the compiler and the instruction set architecture can help engineers choose appropriate programming abstractions and idioms.
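To make the idea of source-level hints and size-oriented build options concrete, here is a minimal sketch, assuming a GCC/Clang toolchain; the attributes and flags shown are standard compiler features and are not necessarily the ones covered in the session. Marking a rarely executed error path as cold is a manual counterpart to hot/cold splitting.

```c
/* Minimal sketch: GCC/Clang-specific attributes and flags; build with, e.g.,
 *   clang -Oz -ffunction-sections -fdata-sections main.c -Wl,--gc-sections
 * (-Os/-Oz favour size; --gc-sections drops unreferenced sections). */
#include <stdio.h>

/* Rarely taken error path: 'cold' tells the compiler to optimize this
 * function for size and keep it away from the hot path; 'noinline'
 * prevents it from being duplicated at every call site. */
__attribute__((cold, noinline))
static void report_error(int channel, int code)
{
    fprintf(stderr, "sensor %d read failed: %d\n", channel, code);
}

static int read_sensor(int channel)
{
    return (channel < 8) ? channel * 42 : -1;  /* placeholder driver call */
}

int main(void)
{
    for (int ch = 0; ch < 10; ch++) {
        int v = read_sensor(ch);
        if (v < 0) {
            report_error(ch, v);               /* cold path */
            continue;
        }
        printf("channel %d -> %d\n", ch, v);
    }
    return 0;
}
```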
- Optimize applications for code size using available compiler techniques and software engineering techniques
- Various code-size and performance trade-offs
- Understanding the code size requirements of embedded applications
Edge computing enables AI inference processing to happen right where the data is generated. One of the most demanding applications of AI is computer vision. In this session, you will experience a walkthrough of what it took to create a real-time computer vision application that runs entirely at the edge. Learn what edge computing infrastructure is required to make it work and manage it remotely. Gain understanding of what compute hardware and software layers were used and what open source options are readily available for you to build your own edge AI solutions. Take away a realistic set of expectations around solution delivery timeframes and infrastructure requirements.
- Understanding of the whole architecture needed for a managed application that does real-time AI at the edge
- Knowledge of what AI processing on streaming video frames requires
- Understanding of processing speeds, frame rates, and product delivery timeframes
As data volume grows, so does the cost of transmitting, storing and processing data. ONE Tech's MicroAI™ technology is revolutionizing the industry by training at the endpoint. By embedding AI onto devices as small as an MCU, OEMs and asset owners can greatly reduce the amount of data that leaves the asset. Learn about this transformational shift in the industry of Embedded AI during this session.
The Edge is anywhere and everywhere outside the corporate center and the Cloud. This means edge nodes can be deployed in a wide variety of environments, where they will possibly face dangers such as humidity, vibrations, dust and many others. Their physical location also exposes them to tampering, theft and even complex network security threats. This panel will explore the various security challenges of Edge Computing and discuss the value of potential solutions, such as root of trust, device quarantines, network segmentation, data encryption at rest and many others.
As more and more homes become smart and IoT devices become more powerful, most cloud computing jobs can be shifted into the home, benefiting the user in both privacy and latency. When heavier AI models are deployed on IoT devices, data parallelism across multiple devices reduces latency further. Models behave differently on different devices due to a variety of factors, some user-based, others device-specific. Analysis of these patterns helps identify the best devices for specific models. In a smart home scenario, 'device churn' on powerful devices like mobiles is a factor that needs to be considered through a cost-benefit analysis, so that a device currently in use is not driven to churn out of the pool.
We propose a heuristic-based methodology for tracking and using user-device interaction patterns, model-specific device behaviours, and static and dynamic device capability scores to estimate the runtimes of models on various devices. Using these runtime estimates and device churn probabilities, the best devices are selected and the data is distributed in the most effective way to achieve the fastest overall response times. The technique also includes a self-learning process so the system improves over time, with the flexibility to add and remove devices dynamically.
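As a rough illustration of this kind of selection heuristic, the sketch below ranks candidate devices by estimated runtime penalized by load and churn probability; the field names, weights and penalty factor are illustrative assumptions, not the proposed methodology itself.

```c
/* Hypothetical sketch: rank candidate devices by an estimated runtime that
 * is penalized by current load and by the probability of the device
 * "churning" (being reclaimed by its user) mid-inference. */
#include <stdio.h>

#define NUM_DEVICES     3
#define REDISPATCH_COST 2.0   /* assumed relative cost of re-sending the work */

typedef struct {
    const char *name;
    double model_ms;     /* runtime of this model on this device (from history) */
    double load;         /* current utilisation, 0.0 (idle) .. 0.9 (busy) */
    double churn_prob;   /* probability the device leaves mid-task */
} device_t;

static double expected_cost_ms(const device_t *d)
{
    double runtime = d->model_ms / (1.0 - d->load);           /* slower when busy */
    return runtime * (1.0 + d->churn_prob * REDISPATCH_COST); /* churn penalty */
}

int main(void)
{
    device_t devices[NUM_DEVICES] = {
        {"phone",     40.0, 0.40, 0.30},  /* fast but likely to be picked up */
        {"smart-hub", 90.0, 0.10, 0.02},  /* slow but always available */
        {"tablet",    55.0, 0.20, 0.10},
    };
    int best = 0;
    for (int i = 1; i < NUM_DEVICES; i++)
        if (expected_cost_ms(&devices[i]) < expected_cost_ms(&devices[best]))
            best = i;
    printf("dispatch to %s (expected %.1f ms)\n",
           devices[best].name, expected_cost_ms(&devices[best]));
    return 0;
}
```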
What are the challenges for today's Industrial IoT applications?
- Products: Industrial grade sensors - what do you need and why?
- Solutions: How to log sensor data directly from nodes or through a cloud application
- Create value: Move from condition monitoring to predictive maintenance with AI & Machine Learning
This session will explore the challenges of coding and debugging complex machine-learning (ML) and artificial intelligence (AI) systems. We will cover how to boost code performance and how to use advanced debugging and trace capabilities in conjunction with various machine learning and deep learning models and algorithms. We will also look at why code quality is such a major issue with machine learning and how you can future-proof your source code. This session is a must-attend for embedded developers who want to improve their ML and AI models and algorithms for IoT applications.
- Explore how optimized code can positively affect the machine-learning code that is very compute intensive
- Learn how structuring your code effectively can improve your application's optimization and how code quality can have a huge impact on compiler optimization of source code
- Understand how to overcome the challenging process of debugging optimized code
• Understand the importance and advantages of edge compute and the convergence of IoT with ML
• Appreciate the challenges of performing ML on embedded targets, especially one as small as the Arm® Cortex™-M0+
• Learn the basic concepts of the embedded ML pipeline: sensor data representation -> feature extraction -> inference (see the sketch after this list)
• Understand the basic concepts of ensemble algorithms
• See Qeexo AutoML in action with the Arduino Nano 33 IoT as a reference target
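The sketch below illustrates the pipeline from the third bullet with a toy example, assuming a single-axis accelerometer window and a trivial threshold classifier standing in for a real AutoML-generated model.

```c
/* Hypothetical embedded-ML pipeline sketch: sensor window -> features ->
 * inference. A real deployment would call model code generated by a tool
 * such as Qeexo AutoML instead of the toy threshold classifier below. */
#include <math.h>
#include <stdio.h>

#define WINDOW 8

/* Feature extraction: mean and RMS of one axis of accelerometer data. */
static void extract_features(const float *samples, int n,
                             float *mean, float *rms)
{
    float sum = 0.0f, sq = 0.0f;
    for (int i = 0; i < n; i++) {
        sum += samples[i];
        sq  += samples[i] * samples[i];
    }
    *mean = sum / n;
    *rms  = sqrtf(sq / n);
}

/* Stand-in "inference": classify the window as idle or vibrating. */
static int classify(float mean, float rms)
{
    (void)mean;
    return rms > 1.0f;   /* toy decision boundary */
}

int main(void)
{
    float window[WINDOW] = {0.9f, 1.1f, -1.3f, 1.4f, -1.2f, 1.0f, -1.1f, 1.3f};
    float mean, rms;
    extract_features(window, WINDOW, &mean, &rms);
    printf("mean=%.2f rms=%.2f -> %s\n",
           mean, rms, classify(mean, rms) ? "vibrating" : "idle");
    return 0;
}
```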
Advancements in traditional compute, combined with the inclusion of power-efficient AI acceleration fabrics at the edge and within endpoints, open up exciting new possibilities for managing the intelligence life-cycle of a system. There is a shift from a cloud-centric intelligence model to a more distributed intelligence architecture. While big-data workloads continue to be cloud-centric, there is a lot of demand for efficient small-data workload management right at the source.
The ability to run AI/ML workloads within tiny machines (TinyML), combined with how we are re-thinking our lives post-COVID, has led to some interesting market dynamics. Some of the areas and use cases seeing disruption include voice as a user interface for human-to-machine communication, environmental sensing, and predictive analytics and maintenance.
Inference engines running on tiny computers within endpoints now enable far more efficient data handling and analytics right at the source, improving data gravity. Embedded intelligence within endpoints also means improved response times, reduced network data transport requirements and removal of the need to be persistently connected to the edge or cloud.
Head over to the Omdia booth on the Swapcard platform to put your most pressing embedded systems and IoT questions to our market-leading analysts.
aicas will discuss recent insights from multinational market research and how the smart edge addresses the challenges it reveals.
Based on market research performed by aicas, we will discuss the main challenges of the edge and how to address them. Challenges like security, latency, and others pose a hurdle to the straightforward adoption of IoT and edge computing. Drawing on existing customer architectures, aicas will discuss how these hurdles can be overcome easily and quickly.
Edge computing enables robotics applications to offload some of their tasks to nearby computing facilities in order to (1) enhance robots' capabilities (e.g., for data-intensive analytics) and (2) improve overall planning and coordination (e.g., for a fleet of robots). By doing so, the whole robotics application becomes distributed with different components running on separate hardware and potentially different networks. Nevertheless, all these software components are expected to behave and be managed as a single and cohesive robotics application.
This talk aims to shed light on how unified management of an edge robotics application can be effectively achieved with Eclipse fog05, an open-source project that falls under the umbrella of the Eclipse Edge Native Working Group. Eclipse fog05 provides a decentralised infrastructure for provisioning and managing compute, storage, communication and I/O resources available anywhere across the network, from the far edge up to the cloud. Moreover, Eclipse fog05 addresses highly heterogeneous systems like the ones found in robotics, where embedded controllers on the robots need to interact with powerful servers at the edge.
A live demonstration will be presented showing how a teleoperation application based on ROS2, a popular robotics development framework, can be deployed by Eclipse fog05 across multiple geographical locations in France and Germany. The demonstration will walk you through the following aspects: (1) how to package a ROS2 application for Eclipse fog05, and (2) how Eclipse fog05 automatically instantiates the ROS2 application on the robot, at the edge and in the cloud, allowing you to remotely drive the robot.
- Understand how robotics applications can significantly benefit from edge computing
- Discover how Eclipse fog05 can provide a unified management of robotics applications at the edge
- Learn how a robot can be effectively driven from a remote location
- The code of the demonstration is available as open source for anyone to explore
Edge computing is poised to be a driving force in networked intelligence in the coming decade. This talk will discuss the migration of selected workloads from the cloud to a hybrid edge model. It will study several representative use cases that strongly benefit from edge computing. Requirements for edge computing in the network, box and node levels will be addressed. Several architectures for edge deployments will be considered, including the Industrial Internet Consortium's edge reference designs. Implementation choices will be explored, including type of processor(s), storage, wired, optical and wireless networking, software backplanes, security, reliability and management.
- Edge computing is a key emerging technology, poised to be as important to us during the next ten years as cloud has been for the last decade
- Critical requirements in areas like latency, network bandwidth, security, privacy, safety, reliability and resilience will drive edge architectures
- There are architectural tradeoffs to consider in how edge computing nodes and networks are deployed, and how compute workloads are mapped onto them
In the past two decades, since its inception, Bluetooth® technology has undergone many enhancements and changes that have allowed it to adapt to current and future market needs.
While most associate Bluetooth technology with wireless audio streaming applications, it has recently evolved to give developers the flexibility to use it for a wide variety of applications across consumer, commercial and industrial use cases.
Some of the important new and upcoming features are:
- Long-range mode
- High-speed mode
- Direction finding: Angle of Arrival and Angle of Departure
- Bluetooth mesh networking
- The upcoming release of LE Audio
In this talk, I give a brief introduction to these recent enhancements and explore the different ways Bluetooth technology has adapted to provide developers with the flexibility to develop solutions that address applications in different industry verticals.
Ultra-wideband (UWB) communications use channels that have a bandwidth of 500 MHz or more, with transmissions at low power. UWB has existed for decades, but it has recently been popularized as major players like Apple and Cisco invest in adding UWB chips to their newest devices. As the number of devices equipped with a UWB chip grows, it will enable a broad spectrum of capabilities. Over the years, researchers have developed an exciting variety of applications such as estimating room occupancy, landslide detection, and human body position/motion tracking. Perhaps the leading use case for UWB technology has been precise indoor localization, with accuracies between 0.5 cm and 10 cm. In this talk, I give a brief introduction to the technology behind UWB and how it operates, and discuss some of its promises as well as implications for our day-to-day activities.
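As a rough, worked illustration of the ranging principle behind that localization accuracy, the sketch below computes a distance from single-sided two-way-ranging timestamps; it is a generic time-of-flight calculation, not tied to any particular UWB chip or API.

```c
/* Hypothetical single-sided two-way-ranging sketch. Timestamps are in
 * seconds; real UWB radios report them in device-specific time units. */
#include <stdio.h>

#define SPEED_OF_LIGHT_M_S 299792458.0

/* t1: initiator sends poll, t2: responder receives it,
 * t3: responder sends reply, t4: initiator receives the reply. */
static double twr_distance_m(double t1, double t2, double t3, double t4)
{
    double round_trip  = t4 - t1;   /* measured at the initiator */
    double reply_delay = t3 - t2;   /* processing time at the responder */
    double tof = (round_trip - reply_delay) / 2.0;
    return tof * SPEED_OF_LIGHT_M_S;
}

int main(void)
{
    /* A 20 ns one-way flight (~6 m) with a 300 us responder turnaround. */
    double d = twr_distance_m(0.0, 20e-9, 20e-9 + 300e-6, 2 * 20e-9 + 300e-6);
    printf("estimated distance: %.2f m\n", d);
    return 0;
}
```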
Embedded IoT wireless devices can have many different requirements, including short to long range, different power budgets and support for industry-standard protocols. STMicroelectronics provides a wide range of innovative embedded wireless solutions able to support these real-world requirements.
- What is the best connectivity option for your IoT application: Wi-Fi, Bluetooth/Bluetooth Mesh, Thread, Zigbee, LoRaWAN, SIGFOX, LPWAN?
- Discover ST’s wide portfolio of single / dual core SoCs and transceivers for 2.4GHz and Sub1GHz embedded devices
- Learn which ultra-low power & high performance product best fits the need of your next generation industrial IoT application
- Discover ST’s innovative wireless product portfolio, including the industry’s first LoRa® and SIGFOX-capable SoC and ST products capable of supporting multiple concurrent Bluetooth Low Energy connections
- Learn how the STM32Cube Ecosystem can simplify your next IoT device design
- Experience our technology through our video libraries and self-guided tutorial videos.
The industry is coming together under the Zigbee Alliance umbrella – through the Connected Home over IP project (Project CHIP). This is a big step for interoperability, and it will take the smart home to new levels with plans to expand those successes into commercial environments. This Working Group plans to develop and promote the adoption of a new, royalty-free connectivity standard to increase compatibility among smart home products, with security as a fundamental design tenet. The goal of CHIP is to simplify and unify environments with one technology. One certification. One logo. Connected Home over IP aims to simplify development for manufacturers and increase compatibility for consumers.
The Project CHIP effort has already attracted more than 145 active member companies of all sizes and across a range of business categories. We have 1,300+ experts involved working through 30+ cross-functional teams within the Alliance – so there's a lot of passion and experience that's being contributed to the spec's development and roadmap.
The initiative offers compelling value across the IoT landscape, including key commercial applications from hospitality to multi-dwelling units and offices. And, with a foundation in IP networking, the spec creates flexible connectivity options beyond the smart home.
We are on track to deliver a draft specification by late 2020 and continue to drive towards our goal of releasing the standard in 2021. Products will be available in the market shortly thereafter.
Cellular IoT connectivity holds tremendous potential for IoT applications. Low-power wide-area network (LPWA) devices are cost-effective, power-efficient, able to communicate across distances of up to tens of kilometers and do not require a constant network connection. They are ideal for applications where bandwidth is limited, and devices need to operate in the field for months or years at a time without maintenance (e.g., smart utility meters, underground sensors or other remote monitoring). Cellular IoT allows enterprises to deploy devices at massive scale, cost-effectively, on existing network infrastructure.
Managing a massive IoT deployment can be complicated. Those new to IoT might assume that cellular coverage is available and stable everywhere, but that's not the case. Even in areas with consistent coverage, devices that consume too much bandwidth can be pushed off the network based on providers' fair usage policies. IoT projects often include a patchwork of hardware and software from various suppliers, creating vulnerabilities and difficulties in securing devices and data. While network operators can take steps to harden security in the cellular domain, the process is quite complex.