How RISC-V Enables Our Connected Future

Join us for the first RISC-V Summit, Dec 3 - 6 at the Santa Clara Convention Center, and learn how open, standard collaboration on the RISC-V ISA is leading to a more connected and secure world.

What's On? Our IoT and Connected Technology Sessions Include:

  • Unleashing Innovation from the Core to the Edge, Martin Fink, Western Digital
  • If We Get RISC-V Security Right, It Will Become the Dominant Processor in the $470B IoT Market, Jothy Rosenberg, Dover Microsystems
  • Extending the RISC-V ISA for Optimized Support of Convolutional Neural Networks in a Multi-Core Context, Eric Flamand, GreenWaves Technologies
  • Explore How to Integrate RISC-V to Build an Open Common Automotive Platform, Tiejun Chen, VMware
  • AI at the Edge Using PULP + eFPGA, Timothy Saxe, QuickLogic & Luca Benini, ETH Zurich

MEET THE RISC-V SUMMIT SPEAKERS WHO WILL PRESENT ON IOT AND CONNECTED TECHNOLOGY

Unleashing Innovation from Core to Edge

Big Data and Fast Data applications are transforming enterprise environments involving core activities on-premises and on hyper-scale cloud infrastructure, as well as those that occur at the network's edge, with new hubs or "data depots" emerging to address the locality and speed of access to data. Regional, local and remote data centers, and/or points of data aggregation, now provide opportunities to transform and add value to data as it flows from IoT and other edge applications into the core of the network, where it can be processed and analyzed to deliver actionable insights and value. Each of these data depots will require unique compute architectures and advanced data processing capabilities, paving the way for RISC-V -- an open instruction set architecture (ISA) designed to meet the diverse application needs of Big Data and Fast Data in this data-centric world. In this keynote, Western Digital CTO Martin Fink will discuss the value of purpose-built compute architectures and how RISC-V will enable a diversity of Big Data and Fast Data applications and workloads at each point along the spectrum, from edge to core.

Tuesday, 4 December 2018 9:20am - 9:40am 

If We Get RISC-V Security Right, It Will Become the Dominant Processor in the $470B IoT Market

The IoT "cyber epidemic" is an existential threat to civilized society. We are dangerously vulnerable to this threat because bugs in software let attackers in, and defenseless processors do their bidding. This must be addressed in hardware at the processor level. With its low barriers to entry and no legacy requirements to support, we have a unique opportunity with RISC-V to truly fix this problem. We can protect a RISC-V core from network-based attacks --without changing it -- using three key innovations. 

  1. First, generate metadata about the intent of the application to provide a co-processor with information unavailable in today's standard development environments. 
  2. Second, apply a set of rules called micro-policies to describe the security properties we want to maintain and enforce. 
  3. And third, create a simple but powerful hardware mechanism that watches every instruction, examines the critical metadata, and evaluates the aforementioned rules to block any instruction doing the wrong thing. 

The RISC-V community is ideally suited to wield this revolutionary technology to create processors that can dominate the burgeoning IoT market, while making our connected world a safer and more secure place.
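
To make the three steps above concrete, here is a minimal sketch in plain C of the tag-and-rule idea: every word carries metadata, and a policy check runs before an instruction is allowed to commit. The tag names, the rule, and the toy "jump" operation are illustrative assumptions, not Dover's actual micro-policy implementation.

```c
/*
 * Minimal sketch (not Dover's actual implementation) of the micro-policy
 * idea: every word carries a metadata tag, and a rule check runs on every
 * "instruction" before it is allowed to commit. Tags, rule, and the toy
 * jump operation are illustrative assumptions.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { TAG_CODE, TAG_DATA, TAG_UNTRUSTED } tag_t;

typedef struct {
    uint32_t value;
    tag_t    tag;     /* metadata generated at compile/load time */
} word_t;

/* Micro-policy rule: control flow may only target words tagged as code;
 * anything else (e.g. attacker-injected data) is blocked before it runs. */
static bool policy_allows_jump(word_t target)
{
    return target.tag == TAG_CODE;
}

static void execute_jump(word_t target)
{
    if (!policy_allows_jump(target)) {
        printf("policy violation: jump to non-code word 0x%08x blocked\n",
               (unsigned)target.value);
        return;  /* real hardware would trap instead of executing */
    }
    printf("jump to 0x%08x permitted\n", (unsigned)target.value);
}

int main(void)
{
    word_t legit   = { .value = 0x80001000u, .tag = TAG_CODE };
    word_t payload = { .value = 0xdeadbeefu, .tag = TAG_UNTRUSTED };

    execute_jump(legit);    /* allowed: target carries the code tag */
    execute_jump(payload);  /* blocked: metadata marks it untrusted */
    return 0;
}
```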

Extending the RISC-V ISA for Optimized Support of CNNs in a Multi-Core Context

Extensibility is an integral part of the RISC-V Instruction Set Architecture (ISA). The decision to extend the ISA in a particular way is mostly influenced by a few key structuring assumptions rooted in the application domain(s) for which a performance boost is required. The motivation for such a boost can be faster execution, but it can also, at the same time, be directed towards reducing power consumption.

One of the challenges posed by extensions is how to preserve the balanced characteristics of the microarchitecture: gate cost, critical paths, and so on. GAP8 leverages the PULP open-source initiative, which itself uses the RISC-V ISA for its processing elements. Both PULP and GAP8 make heavy use of RISC-V extensions.

In this session, we will show how these extensions bring a significant performance/energy boost compared to the base RISC-V ISA for Deep/Convolutional Neural Network (DNN/CNN) applications by combining Digital Signal Processing (DSP) related extensions with advanced Single Instruction Multiple Data (SIMD) capability. We will go step by step through the optimization process that led to the ISA extension definition. We will then show how such an extended core can be used efficiently in a multi-core, shared-memory model, still using CNN/DNN applications as a driving use case. We will illustrate the capability of the GAP8 multi-core SoC on some real-life, medium-complexity CNNs.
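
As a rough illustration of the kind of inner loop such extensions target, here is a minimal sketch in portable C (assuming little-endian byte order). It does not use real PULP/GAP8 extension opcodes; the byte-packing scheme is an assumption, and the comments only mark where a 4-way SIMD sum-of-dot-products instruction would replace the four scalar multiply-accumulates.

```c
/*
 * Illustrative only: a scalar 8-bit convolution inner loop, and a version
 * written over packed 32-bit words to mimic the 4-way SIMD dot-product
 * step that DSP-style RISC-V extensions can retire in one instruction.
 * Portable C; mapping to real extension opcodes is an assumption.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Baseline: one multiply-accumulate and one byte load per element. */
static int32_t dot_scalar(const int8_t *a, const int8_t *b, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

/* Packed form: each 32-bit load carries four int8 lanes; a SIMD
 * "sum of dot products" extension would do the four MACs below at once.
 * Lane extraction by shifting assumes a little-endian target. */
static int32_t dot_packed(const int8_t *a, const int8_t *b, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i += 4) {
        uint32_t wa, wb;
        memcpy(&wa, a + i, 4);          /* one word load = 4 activations */
        memcpy(&wb, b + i, 4);          /* one word load = 4 weights     */
        for (int lane = 0; lane < 4; lane++) {
            int8_t xa = (int8_t)(wa >> (8 * lane));
            int8_t xb = (int8_t)(wb >> (8 * lane));
            acc += (int32_t)xa * (int32_t)xb;   /* hardware: 1 instruction */
        }
    }
    return acc;
}

int main(void)
{
    int8_t act[8] = { 1, -2, 3, 4, 5, -6, 7, 8 };
    int8_t wgt[8] = { 2,  2, 2, 2, 1,  1, 1, 1 };
    printf("scalar: %d  packed: %d\n",
           (int)dot_scalar(act, wgt, 8), (int)dot_packed(act, wgt, 8));
    return 0;
}
```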


RISC-V Accelerators
Tuesday, 4 December 2018 4:55pm - 5:25pm 

Explore How to Integrate RISC-V to Build an Open Common Automotive Platform

In recent years, vehicles have been evolving rapidly. Today's automobiles must address increasingly complex E/E systems and are designed with more than 70 ECUs, sometimes over 100. In the future, the automotive industry will continue developing ADAS to enable connected vehicles and self-driving cars, with many new hardware and software components.

The automotive industry needs to virtualize in-vehicle systems and adopt AMP at this point in its development. Vehicles have become increasingly dependent on more powerful microprocessors and systems over the years in order to support advanced technologies: complex head units, hundreds of ECMs, electronic dashboards, diagnostics, telematics, autonomous driving, infotainment systems, electric charging, and V2X scenarios.

Time to market is critical, and technology must be used to reduce the product design cycle, not increase it. Here we aim to introduce a hardware-assisted virtualization solution and an AMP (asymmetric multiprocessing) framework to accelerate this evolution while addressing some significant challenges:

  • Software-defined modern car
  • Flexibility, interoperability and compatibility
  • Costs
  • Software and hardware consolidation
  • Security -- isolation between different domains
  • Mixed criticality -- ASIL
  • Decoupling licensing issues specific to each application
  • Easily supporting different applications on Linux, Android, commercial RTOSes, etc.
  • Automotive edge computing -- bringing cloud management down to the edge, inside the car

So, with this session, our mission is to explore how to design a next-generation automotive platform based on RISC-V with enterprise virtualization that guarantees secure isolation, maintainability, interoperability and flexibility while ensuring high performance, open standards and intelligence. We could build a complex SoC with RISC-V to provide a single multi-core platform, and we would like to explore whether heterogeneous architectures such as other CPU architectures, GPUs, FPGAs, etc. can be integrated to build one common open-source automotive platform.
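
As a purely hypothetical illustration of the kind of static partitioning such an AMP/virtualization framework might describe, the sketch below defines a few isolated domains with dedicated cores, private memory and a criticality level; every name, address and value is invented for the example and does not reflect any particular product or API.

```c
/*
 * Hypothetical sketch of static partitioning for a multi-core RISC-V
 * automotive SoC: each domain gets dedicated harts, a private memory
 * window and a criticality level, so an infotainment crash cannot touch
 * a safety-critical domain. All names and values are invented.
 */
#include <stdint.h>
#include <stdio.h>

typedef enum { CRIT_QM, CRIT_ASIL_B, CRIT_ASIL_D } criticality_t;

typedef struct {
    const char   *name;        /* human-readable domain name      */
    const char   *guest_os;    /* e.g. Linux, Android, an RTOS    */
    uint32_t      core_mask;   /* which harts are dedicated to it */
    uint64_t      mem_base;    /* start of its isolated memory    */
    uint64_t      mem_size;
    criticality_t level;       /* mixed-criticality (ASIL) class  */
} domain_t;

static const domain_t domains[] = {
    { "cluster-safety", "RTOS",    0x03u, 0x80000000ull,   64ull << 20, CRIT_ASIL_D },
    { "gateway",        "Linux",   0x0Cu, 0x90000000ull,  256ull << 20, CRIT_ASIL_B },
    { "infotainment",   "Android", 0xF0u, 0xA0000000ull, 1024ull << 20, CRIT_QM },
};

int main(void)
{
    for (size_t i = 0; i < sizeof domains / sizeof domains[0]; i++)
        printf("%-14s os=%-8s cores=0x%02x mem=%lluMiB level=%d\n",
               domains[i].name, domains[i].guest_os,
               (unsigned)domains[i].core_mask,
               (unsigned long long)(domains[i].mem_size >> 20),
               (int)domains[i].level);
    return 0;
}
```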

Open RISC-V Platforms
Tuesday, 4 December 2018 4:30pm - 5:00pm 

AI at the Edge Using PULP + eFPGA

PULP is a silicon-proven open-source parallel platform for ultra-low power computing created by researchers at ETHZ and UNIBO with the objective of delivering high compute bandwidth combined with high energy efficiency. The platform is organized in clusters of RISC-V cores that share a common, tightly-coupled data memory subsystem. The platform also includes a set of SystemVerilog-described IP blocks, their related synthesis and simulation scripts, and the runtime software (written in C and RISC-V assembly) necessary to provide a complete system. All of the architecture, IP, scripts and software are open-sourced to encourage global collaboration and development. 

Integrating eFPGA technology with the PULP Platform enables users to offload critical functions from the processor(s) and implement them in eFPGA fabric. This approach enables the creation of multiple hardware co-processors that increase system efficiency and performance while decreasing power consumption. An example use case for the eFPGA technology is to enable hardware acceleration of feature extraction for AI applications. In this case, using eFPGA fabric significantly improves performance and lowers power consumption by offloading those functions from the RISC-V processor, while still maintaining the ability to adopt and implement new algorithms even after deployment in the field. Our goal is to implement AI at the edge in a single platform with reconfigurable acceleration capabilities; for this purpose, ETH and QuickLogic are developing a test chip in 22FDX technology which will showcase the benefits of having AI feature extraction implemented in eFPGA fabric to achieve higher performance with the lowest possible power consumption.
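
As a rough, hedged illustration of this offload pattern (not QuickLogic's or PULP's actual API), the sketch below models in portable C a driver that programs an eFPGA job through a register file and polls for completion, with a small software stand-in for the fabric computing one toy feature (signal energy). All register and function names are invented.

```c
/*
 * Illustrative sketch only: the CPU-to-eFPGA offload pattern, modelled
 * in portable C. A "driver" on the RISC-V side writes job parameters
 * into the accelerator's registers, starts it, and polls for completion;
 * the fabric is stood in for by a small software model. Invented names.
 */
#include <stdint.h>
#include <stdio.h>

/* Register file the CPU would see as memory-mapped I/O. */
typedef struct {
    const int16_t *src;       /* input sample buffer       */
    int32_t       *dst;       /* output feature buffer     */
    uint32_t       len;       /* number of samples         */
    uint32_t       start;     /* control: kick off the job */
    uint32_t       done;      /* status: job finished      */
} efpga_regs_t;

/* Software stand-in for the reconfigurable fabric: one toy feature
 * (sum of squares) extracted from the sample window. */
static void efpga_model_step(efpga_regs_t *r)
{
    if (!r->start || r->done)
        return;
    int32_t energy = 0;
    for (uint32_t i = 0; i < r->len; i++)
        energy += (int32_t)r->src[i] * (int32_t)r->src[i];
    r->dst[0] = energy;
    r->done = 1;
}

/* Driver side: program the job, start it, poll until done. */
static int32_t extract_feature(efpga_regs_t *r,
                               const int16_t *samples, uint32_t n)
{
    int32_t feature = 0;
    r->src = samples; r->dst = &feature; r->len = n;
    r->done = 0; r->start = 1;
    while (!r->done)
        efpga_model_step(r);  /* real hardware: CPU sleeps or runs other work */
    return feature;
}

int main(void)
{
    int16_t window[4] = { 3, -4, 5, 2 };
    efpga_regs_t regs = { 0 };
    printf("energy feature = %d\n", (int)extract_feature(&regs, window, 4));
    return 0;
}
```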

RISC-V Accelerators
Tuesday, 4 December 2018 4:30pm - 4:50pm