Emerging Innovations Across Computing and Communications

Broad-based research makes technology more capable, more intelligent, and more secure.

Highlights

  • Enablement for 5G networks and topologies makes them more capable, flexible, and cost-efficient

  • Research for autonomous driving is working toward taking humans out of the control loop

  • Enhanced blockchain will power trust in a growing number of electronic transactions

  • Sensemaking makes human spaces smarter using combined multi-sensor data

  • Anticipatory computing enables compelling experiences by understanding people, spaces, and situations

As a central aspect of its charter, Intel Labs continually engages in the development of next-generation technologies, across the spectrum from fundamental research to applied technology. In particular, this approach involves the development of collaborative research arrangements with academic and industry partners to bring nascent technical capabilities to fruition. This page highlights a few of the research areas that Intel Labs is currently engaged in.

Next-Generation Wireless: 5G/Beyond5G Networks in 2020

The 5G radio standard will emerge in 2020.

We have been on the 5G journey since 2012. Significant progress has been made along the main theme of enhanced mobile broadband, in both throughput and network capacity, as well as in verticals addressing diverse requirements including high reliability and low latency. The first wave of 5G designs and services began rolling out in 2020 to enable new types of usages such as virtual reality and cloud gaming, and to enable massively scalable transmission for IoT. We have also begun to establish our vision and plan for life beyond 5G, aiming to address two types of challenges: a) a smart world and the shifting nature of data require us to rethink communication design principles, moving from transmitting data to harvesting the value of data; and b) ever-increasing network diversity and complexity require us to rethink system scalability and to transform our design approach. To tackle these challenges, we need to take a systems view across communications, computing, and intelligence, and develop fundamental innovations at the intersection of these disciplines.

As part of its mission to enable the hardware, software, and carrier ecosystems for 5G & Beyond, Intel Labs has engaged with the university research community since 2012 on multiple fronts: a) through Intel Strategic Research Alliances on “5G: Transforming the Wireless User Experience” and “Machine Learning for Wireless Networks & Systems,” and b) through the Intel Science & Technology Center on “Wireless Networking for Autonomous Systems,” providing both technical and financial support to this community of researchers.

Intel Labs has focused its 5G/Beyond5G enablement efforts in three primary areas:

• Improving Comm KPIs and Uniform Connectivity: Novel approaches such as larger-scale MIMO, beamforming, and multi-radio convergence to support the projected 100-fold increase in throughput and capacity across access and backbone networks, and to reduce variability of Quality of Service/Experience over time and geographic location. (See the beamforming sketch after this list.)

• Confluence of Compute + Comm + Intelligence: Establish fundamental capabilities for scalable, secure, and efficient networking using Intel’s hardware and software; rethink design principles for distributed computing at the edge; rethink compute and AI as key ingredients of the network; and rethink security, privacy, and data integrity for massive-scale distributed and concurrent computing and communication.

• Wireless Properties for Non-Comm Usages: Leverage wireless properties and signal processing for non-communication usages or non-conventional communications, including wireless chip-to-chip connections, wireless data centers, wireless security, and wireless sensing (including 4D radar).
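To make the first area above concrete, the sketch below shows the textbook maximum-ratio transmission (MRT) form of beamforming: the transmit weights are the conjugated, normalized channel vector, which phase-aligns the contributions of all antennas. This is a minimal numpy illustration of the principle, not Intel's implementation; the antenna count and channel model are assumptions chosen for the example.

```python
import numpy as np

# Minimal maximum-ratio transmission (MRT) beamforming sketch.
# Given a channel vector h from an antenna array to one user, the
# weights w = conj(h) / ||h|| phase-align every antenna's signal,
# so received power grows roughly linearly with the antenna count.

rng = np.random.default_rng(0)
num_antennas = 64  # assumed array size, for illustration only

# Rayleigh-fading channel: i.i.d. complex Gaussian entries.
h = (rng.standard_normal(num_antennas)
     + 1j * rng.standard_normal(num_antennas)) / np.sqrt(2)

# MRT weights under a unit transmit-power constraint.
w = np.conj(h) / np.linalg.norm(h)

# Array gain relative to a single-antenna link, in dB.
gain = np.abs(w @ h) ** 2
print(f"array gain: {10 * np.log10(gain):.1f} dB")  # ~10*log10(64) ≈ 18 dB
```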

Intel Labs also developed virtualized Evolved Packet Core (vEPC) reference software, which it made available to the industry as a whole by contributing it to the Open Networking Foundation’s CORD (Central Office Re-architected as a Datacenter) project. The code is optimized using the Data Plane Development Kit (DPDK) for Intel® Xeon® processors, providing virtualized mobile core capabilities for use by service providers supporting their subscribers.

The Advancement of Autonomous Driving Systems

The six levels of autonomous driving.

While substantial advances have been made in the technologies needed for autonomous driving, a great deal of research and development remains to be done. Vehicle manufacturers have made systems with increasing autonomy available to the public in recent years, although hurdles to widespread adoption remain, both to maximize safety and to build public trust.

Intel Labs is involved in research both to improve systems at the current level of conditional automation and to advance towards the goal of full automation, where onboard systems assume full control of the vehicle, which could operate autonomously with no human onboard.

Machine vision and AI-empowered sensing research play a fundamental role in the work being done by Intel Labs for autonomous vehicles, particularly in the context of recognizing potentially dangerous traffic situations and rare events. Examples of usage include robustly detecting other road users in cluttered or obstructed environments and avoiding dangerous situations, as well as more forward-looking abilities based on interpreting and spontaneously adapting to the environment, such as responding appropriately to unforeseen circumstances. This work includes optimizing solution stacks of hardware, firmware, and software for in-vehicle use, as well as providing development tools for the solution ecosystem.

In the area of road-experience management (REM) mapping, Intel Labs has been working on sensor-fusion and data-collection technologies for road-status data provided by sensors in on-road vehicles as well as infrastructure-centric sources. Synthesis of contextual information such as weather, incident reports, and construction alerts has the potential to dramatically improve safety and utility of intelligent transportation systems.
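As a simplified illustration of this kind of synthesis, the sketch below merges reports from hypothetical vehicle and infrastructure sources into a per-segment hazard score. The source names, fields, and plain averaging rule are invented for the example; the actual REM pipeline is far more sophisticated.

```python
from dataclasses import dataclass, field

# Toy fusion of contextual road-status feeds: vehicle sensor reports
# and infrastructure feeds are merged into one per-segment status.
# Sources, fields, and the averaging rule are illustrative only.

@dataclass
class SegmentStatus:
    reports: list = field(default_factory=list)

    def add(self, source: str, hazard: float) -> None:
        self.reports.append((source, hazard))

    def hazard_score(self) -> float:
        """Simple average; a real system would weight source reliability."""
        return sum(h for _, h in self.reports) / max(len(self.reports), 1)

segment = SegmentStatus()
segment.add("vehicle-camera", 0.2)     # wet road detected by a car
segment.add("weather-feed", 0.5)       # heavy rain in the area
segment.add("incident-report", 0.9)    # collision reported upstream
print(f"segment hazard: {segment.hazard_score():.2f}")
```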

On the path toward full vehicle autonomy, Intel Labs takes an ecosystem-focused view that includes the development and implementation of open, technology-agnostic standards that enable interoperability among solution vendors.

A range of research collaborations are underway between Intel Labs and vehicle manufacturers, tier-1 automotive suppliers, sensor and electro-mechanical systems vendors, intelligent infrastructure service providers, research institutions, and government bodies. In an effort to open the ecosystem to members of all types and sizes, Intel Labs is also investigating frameworks such as open simulation environments that enable the development of autonomous systems by those without access to field fleets of automated vehicles.

Technology Enablement for Security

Intel® Software Guard Extensions (Intel® SGX)
Intel Labs is working on enablement around trusted execution environments within secure enclaves based on Intel® SGX to enhance privacy, security, and scalability of blockchain deployments. In particular, this set of capabilities keeps blockchain data in encrypted form until it is needed for a transaction and then decrypts it in a hardware-secured enclave where only permitted participants can view it.
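The sketch below models that pattern in miniature: ledger records are sealed with a key held by a trusted component (standing in for an Intel® SGX enclave), and decryption happens only behind a permission check. The key handling, participant list, and function names are illustrative assumptions, not the real APIs.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Toy model of the enclave pattern described above: ledger entries
# stay encrypted; decryption happens only inside a trusted function
# (standing in for an Intel SGX enclave) and only for permitted
# participants. All names here are illustrative, not the real APIs.

enclave_key = AESGCM.generate_key(bit_length=256)  # conceptually sealed in the enclave
PERMITTED = {"alice", "bob"}

def seal_transaction(plaintext: bytes) -> bytes:
    """Encrypt a transaction before it is written to the ledger."""
    nonce = os.urandom(12)
    return nonce + AESGCM(enclave_key).encrypt(nonce, plaintext, None)

def enclave_view(sealed: bytes, requester: str) -> bytes:
    """Decrypt only for permitted participants -- the enclave boundary."""
    if requester not in PERMITTED:
        raise PermissionError(f"{requester} may not view this transaction")
    nonce, ciphertext = sealed[:12], sealed[12:]
    return AESGCM(enclave_key).decrypt(nonce, ciphertext, None)

record = seal_transaction(b"transfer 10 units: alice -> bob")
print(enclave_view(record, "alice"))   # permitted: plaintext returned
```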

Intel Labs is conducting a number of open-source development projects that extend the implementation of Intel® SGX for cloud security. That work includes the following projects:

• Remote Attestation with Transport Layer Security (RA-TLS) integrates Intel® SGX remote attestation into the TLS connection setup to assess the trustworthiness of endpoints, extending the standard X.509 certificate with Intel® SGX-related information that enables the receiver of a certificate to verify that it is communicating with a secure enclave based on Intel® SGX. This work does not require changes to standard TLS implementations, and the project provides implementations for three common TLS libraries: OpenSSL, wolfSSL, and mbedTLS. (See the verifier sketch after this list.)

• Graphene-SGX Secure Container (GSC) is a container system based on Docker instances that enables applications to be protected by Graphene-SGX while running in containers. In addition to the Docker instance where the application runs under Graphene-SGX, the project provides a front-end engine that can automatically launch legacy Docker container images inside GSC container instances.

• Intel® SGX Enabled Key Manager Service with OpenStack Barbican protects the OpenStack Barbican key management system, which secures secrets such as passwords, encryption keys, and X.509 certificates against system software attacks. This approach takes advantage of Intel® SGX to provide greater security than software-based plugins as well as greater scalability than hardware security module plugins.

• Snort® Intrusion Detection System (IDS) with Intel® SGX hardens Snort by running it inside a secure enclave along with a network layer optimized using DPDK to achieve line-rate throughput. This project aims to secure IDSs based on virtualized network functions running in public or private cloud environments, maintaining high throughput by operating network I/O outside the Intel® SGX enclave. (See the structural sketch after this list.)
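The verifier side of RA-TLS, referenced in the first project above, can be sketched roughly as follows: pull a custom X.509 extension carrying the SGX quote out of the peer certificate, then hand the quote to an attestation check. The OID below is a placeholder and verify_sgx_quote() is a stub; the real RA-TLS extension OIDs and attestation flow are defined by the project itself.

```python
from cryptography import x509
from cryptography.x509.oid import ObjectIdentifier

# Sketch of the RA-TLS verifier pattern: the SGX quote rides in a
# custom X.509 extension of the peer's certificate, which the
# verifier checks before trusting the TLS endpoint.

RA_TLS_QUOTE_OID = ObjectIdentifier("1.2.3.4.5.6")  # hypothetical OID

def verify_sgx_quote(quote: bytes) -> bool:
    """Placeholder for remote attestation of the embedded quote."""
    raise NotImplementedError("delegate to the attestation service")

def check_peer_certificate(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        ext = cert.extensions.get_extension_for_oid(RA_TLS_QUOTE_OID)
    except x509.ExtensionNotFound:
        return False  # no quote: not an RA-TLS certificate
    quote = ext.value.value  # raw payload of the unrecognized extension
    return verify_sgx_quote(quote)
```

Likewise, the Snort-on-SGX design splits work across the enclave boundary; the toy below mirrors that structure, with packet I/O outside and rule matching "inside" a function standing in for the enclave. DPDK and SGX are C-level technologies, so this is a structural sketch only, with invented rules and packets.

```python
import queue
import re

# Structural sketch of the Snort-on-SGX split: packet I/O stays
# outside the enclave for throughput, while rule matching runs
# "inside" (here, a plain function standing in for the enclave).

RULES = [re.compile(rb"DROP TABLE"), re.compile(rb"/etc/passwd")]

def inspect_in_enclave(payload: bytes) -> bool:
    """Rule matching that would execute inside the SGX enclave."""
    return any(rule.search(payload) for rule in RULES)

def io_loop_outside_enclave(packets):
    """Untrusted side: polls the NIC (here, a list) and queues packets."""
    q = queue.SimpleQueue()
    for pkt in packets:
        q.put(pkt)
    alerts = []
    while not q.empty():
        pkt = q.get()
        if inspect_in_enclave(pkt):
            alerts.append(pkt)
    return alerts

print(io_loop_outside_enclave([b"GET /index.html", b"SELECT 1; DROP TABLE users"]))
```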

Blockchain
Enabling trusted transactions among untrusted parties has been a fundamental requirement on the Internet almost since its inception. Blockchain emerged as a method of providing programmatic, decentralized trust between any two parties with the advent of Bitcoin in 2009. This approach is designed explicitly to operate without any intermediary such as a bank or other authority, avoiding the added cost, delay, and complexity typically associated with such third parties. Inherent in this approach is the ability to trace and verify a transaction’s origin and provenance, as well as to cryptographically protect the transaction itself.

The same capabilities that make blockchain well suited to cryptocurrency also make it valuable in enterprise permissioned contexts, enforcing ownership of content and data, including future usages such as smart contracts. As blockchains move beyond initial implementations for internal use within enterprises, blockchain-to-blockchain communication will require open standards for interoperability, which Intel Labs is helping to develop in collaboration with academic and industry partners.

Intel Labs began work in 2014 on the project then code-named Sawtooth Lake, which it made available and continues to maintain as open source through the Linux* Foundation as the Hyperledger Sawtooth project. This enterprise blockchain platform simplifies the development of distributed ledger applications and networks. By separating the core system from the application domain, Sawtooth allows developers to build business logic for smart contracts and other implementations using their platform of choice, without being concerned about the underlying design of the core system.

Intel Labs has also been instrumental in the development of Proof of Elapsed Time (PoET), a consensus algorithm incorporated into Sawtooth. This open-source mechanism offers a power-efficient alternative to the dominant approaches, Proof-of-Work (PoW) and Proof-of-Stake (PoS), which are integral to leading cryptocurrencies including Bitcoin and Ethereum. PoET uses a timer algorithm based on secure instruction execution to replace the cryptographic hashing puzzles used in PoW. The relative simplicity of the timer algorithm saves dramatically on the energy required to achieve consensus across large collections of nodes.
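The core of the idea can be simulated in a few lines: each validator draws a random wait from a trusted timer, and whoever's timer expires first publishes the next block. The sketch below models the timer with an exponential draw and omits the SGX attestation that, in the real protocol, proves the wait was honestly enforced.

```python
import random

# Toy simulation of Proof of Elapsed Time (PoET): every validator
# draws a random wait time from a trusted timer (modeled here with
# random.expovariate) and the shortest wait wins the right to publish
# the next block. SGX attestation of the wait is omitted.

def elect_leader(validators, rate=1.0, seed=None):
    rng = random.Random(seed)
    waits = {v: rng.expovariate(rate) for v in validators}
    leader = min(waits, key=waits.get)
    return leader, waits

validators = [f"node-{i}" for i in range(5)]
leader, waits = elect_leader(validators, seed=42)
for node, wait in sorted(waits.items(), key=lambda kv: kv[1]):
    print(f"{node}: waited {wait:.3f}s")
print(f"leader: {leader}")
```

Because every node's draw is identically distributed, each node wins in proportion to its share of the validator population, giving lottery-style fairness at the energy cost of a sleep rather than a hashing race.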

Sensemaking

From a human perspective, vision and hearing are each obviously useful on their own, but their combination provides value beyond the sum of its parts, in terms of making sense of the world. For example, it is common for people conversing in a crowded room to watch others’ lips, performing a kind of rudimentary lip reading as an aid to understanding what is being said.

This human example of multimodal sensemaking is analogous to work at Intel Labs combining multiple types of sensors that provide digital information to make human spaces smarter. This research stretches across a wide variety of contexts, including ambient computing, as well as smart homes, offices, factories, and retail. Many of those spaces have enough in common that sensemaking technologies and usage models can be generalized across them.

Computer Vision & Robotics
Within the sensemaking realm, the thrust of computer-vision research at Intel Labs has three main components:

• Making Agents More Intelligent includes self-learning capabilities that help smart agents discern what is important. For example, whereas today’s virtual assistants by and large require constant connectivity and offer the same functionality to everyone, future generations could learn the routines and privacy preferences of their owners over time, providing a personalized experience in either a connected or unconnected state.

• Synthesis Across Modalities combines data from multiple types of sensors to understand factors such as context, physical surroundings, activities, and emotions. For example, verbal and posture cues from a group of children can help infer whether they are playing a game, fighting, paying attention, or daydreaming. (See the fusion sketch after this list.)

• Spatial Understanding relates to a space and its structure, as well as how it changes over time. This could include factors as diverse as the presence of another vehicle in a roadway or the number of people present in a conference room.
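A minimal sketch of the second item's idea, using invented classes and weights: each modality emits per-class probabilities, and a weighted log-linear pool combines them, so agreement between modalities sharpens the fused estimate.

```python
import numpy as np

# Minimal late-fusion sketch for "synthesis across modalities":
# each modality emits per-class probabilities, and a weighted
# log-linear pool combines them. Classes and weights are invented
# for illustration.

CLASSES = ["playing", "fighting", "attentive", "daydreaming"]

def fuse(modality_probs: dict, weights: dict) -> np.ndarray:
    """Weighted log-linear pooling of per-modality class probabilities."""
    log_p = sum(weights[m] * np.log(p + 1e-9) for m, p in modality_probs.items())
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

audio  = np.array([0.10, 0.60, 0.10, 0.20])  # raised voices
vision = np.array([0.50, 0.35, 0.10, 0.05])  # fast, erratic motion
fused = fuse({"audio": audio, "vision": vision}, {"audio": 0.5, "vision": 0.5})
print(dict(zip(CLASSES, fused.round(3))))
```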

Building open platforms that can holistically combine capabilities such as these while meeting requirements such as very low power and high performance where necessary is a core competency of Intel that enables Intel Labs research. This effort also benefits from Intel’s unique end-to-end systems expertise, from products in people’s houses to products that power data centers and clouds, as well as a software ecosystem that orchestrates activity across all those modalities.

Intel Labs is working to address the challenge of having sufficient memory and other resources available as needed, in the context of shifting requirements for granularity. For example, a robot moving across the floor needs a relatively rudimentary understanding of obstacles in its way, compared to the finer-grained task of picking up and manipulating a small, fragile object. Software development for these adaptive circumstances must address what reconstructions of space are needed, how to represent them, and how to support them with APIs and other programming elements.
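One simple way to picture this granularity trade-off is a two-level occupancy map: a coarse grid for navigation, with fine-grained patches allocated lazily only where manipulation requires them. The resolutions and data structures below are assumptions for illustration, not Intel Labs' actual representation.

```python
import numpy as np

# Toy two-level occupancy map: a coarse grid suffices for driving
# around obstacles, and only the cell around a manipulation target
# is refined to finer resolution, keeping memory use proportional
# to where detail is actually needed.

COARSE = 0.5   # meters per coarse cell
FINE   = 0.05  # meters per fine cell

coarse_grid = np.zeros((20, 20), dtype=bool)  # a 10 m x 10 m room
fine_patches = {}                             # refined cells only

def mark_obstacle(x: float, y: float) -> None:
    coarse_grid[int(x / COARSE), int(y / COARSE)] = True

def refine_cell(i: int, j: int) -> np.ndarray:
    """Allocate fine-grained occupancy only where manipulation needs it."""
    n = int(COARSE / FINE)
    return fine_patches.setdefault((i, j), np.zeros((n, n), dtype=bool))

mark_obstacle(3.2, 4.7)        # enough detail to drive around
patch = refine_cell(6, 8)      # refining near a fragile object
patch[3, 4] = True             # 5 cm resolution inside one cell
print(coarse_grid.sum(), len(fine_patches), patch.sum())
```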

Research teams are also developing capabilities that enable multiple robots to collaborate on complex tasks. A key aspect of this work is the development of algorithms for the autonomous operation of unmanned aerial vehicles (UAVs), including flight controls as well as planning and decision making for coordinated operation in dynamic real-world environments. Researchers have modified a version of the Intel® Aero Development Platform to operate autonomously and are using it to develop usages that show promise of commercial viability in the real world.

For example, a group of UAVs could work together to efficiently perform inventory in large-scale warehouses or inspection of large capital equipment. The benefits of multiple UAVs in this context include both dividing the workload and providing multiple points of view, potentially with different types of sensors. Likewise, such a group could work together to enable autonomous farming, identifying conditions such as stress on plants from drought or pests and optimizing the use of chemicals.

Another usage is the creation of immersive consumer experiences by capturing synchronized video streams from multiple UAVs and stitching them together to create a 3D composite. The goal is for the resulting data stream to be transmitted to at-home viewers, who could then view the 3D image from any angle, zoom in and out, and so on. There is some interest in using these capabilities in the Olympic Games of the future.

Anticipatory Computing Lab
We are adding enormous numbers of new sensors to the world, and older ones such as cameras and microphones are gaining new importance. The Anticipatory Computing Lab is developing the algorithms to make sense of what all those sensors are producing. In many cases, this involves combining inputs across multiple types of sensors to develop an even richer and more robust understanding of those signals, to be more valuable in the real world.

The Anticipatory Computing Lab is working to enable compelling experiences through deep understanding of people, spaces, and situations via sensing and sensemaking. We leverage this understanding to take action in diverse ways: simplifying, connecting the dots, enhancing awareness, challenging, assisting, recommending, entertaining, and improving efficiency and workflow.

The Lab is currently focused on making everyday places smarter and more responsive to their human inhabitants. This means dynamically understanding both the evolving physical layout of a space and people’s actions and activities, what they are focused on, and even their emotions. Research on these problems is informed by psychology and social science research, with their attention to interpersonal dynamics, as well as by the tools of interaction design.

This research reaches across a wide range of domains. For example, the Anticipatory Computing Lab is researching future interactions between a fully autonomous car and its passengers. Autonomous vehicles need to do more than just navigate a map and avoid collisions. They must also work collaboratively with their passengers to smoothly handle unexpected changes in passengers’ plans, or the need for a quick stop along the way. Research in the Lab has thus included exploratory design of an in-cabin agent, or assistant, which will do some of the interaction work that a human driver currently does for passengers. This involves both complex problems, such as developing a shared sense of the environment, and much more basic ones, such as discerning when a passenger is verbally addressing the in-cabin agent, as opposed to simply speaking to another passenger.
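As a toy illustration of the addressee problem in that last sentence, the sketch below combines three invented signals into a hand-weighted score. A deployed system would learn such a decision from multimodal data; the features, weights, and threshold here are assumptions.

```python
from dataclasses import dataclass

# Toy heuristic for in-cabin addressee detection: is the passenger
# speaking to the agent, or to another passenger? All features,
# weights, and the threshold are invented for illustration.

@dataclass
class Utterance:
    has_wake_phrase: bool       # e.g., the agent's name was spoken
    gaze_at_console: float      # fraction of the utterance, 0..1
    other_speaker_active: bool  # another passenger was mid-conversation

def addressed_to_agent(u: Utterance, threshold: float = 0.5) -> bool:
    score = 0.0
    score += 0.6 if u.has_wake_phrase else 0.0
    score += 0.4 * u.gaze_at_console
    score -= 0.3 if u.other_speaker_active else 0.0
    return score >= threshold

print(addressed_to_agent(Utterance(True, 0.2, False)))   # True
print(addressed_to_agent(Utterance(False, 0.9, True)))   # False
```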

A related research project focuses on making our homes smarter. While there has been a tremendous amount of interest and activity in “Smart Home” technologies, few examples exist in the world that show what a genuinely smart home might do for its residents – not just in making daily life more convenient, but actually creating new value. Consider the example of early childhood learning. As researchers in the Anticipatory Computing Lab conducted extensive interviews and observations in households of all shapes and sizes, one consistent finding among parents of young children was that, while they see tremendous educational value in technology for their children, they also have anxieties about too much screen time. Researchers recognize this as an opportunity for ambient computing to deliver the benefits of digital technology while avoiding some of its perceived shortcomings, including physical inactivity, repetitive games, and sometimes questionable content.

Kid Space is an educational environment being developed on the basis of this research. It incorporates activity tracking and natural language understanding, combined with smart projection to create digital environments and characters that children interact with in ways that are both engaging and educational. The team has developed intelligent animated agents, for example, that children enjoy talking to, and which can be used to deliver lessons and even assistance with educational tasks. Other uses of smart projection can enable open-ended exploration and learning in the spirit of some of the most popular computer games. Imagine, for example, a bedroom floor transformed by smart projection into the sands of a remote fossil bed, enabling children playing with a few simple plastic beach shovels to dig up projected images of fossils – which may even come to life on the walls as living dinosaurs.

Such a scenario makes use of many sensory modalities: location and identity tracking enables the system to interact simultaneously with multiple children in the room; pose detection determines whether a child is bent over digging in the virtual sand; audio and natural language understanding ensure that children are learning the intended lessons; a novel use of radio frequency identification (RFID) technology, attached to the shovels, provides a finer-scale sense of motion even in situations where line of sight is unavailable to cameras; and finally, depth-sensing cameras map the room so projected characters interact appropriately with furniture and other objects and room features.
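A fragment of that fusion logic might look like the sketch below, which combines pose ("is the child bent over?") with RFID motion on the shovel, trusting RFID alone when the camera loses line of sight. Field names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

# Toy fusion of two Kid Space signals: pose detection and RFID
# motion on the shovel. Either alone is ambiguous; together they
# support the "digging" inference. All values are illustrative.

@dataclass
class Signals:
    torso_angle_deg: float     # 0 = upright, 90 = fully bent over
    shovel_rfid_motion: float  # normalized motion energy, 0..1
    camera_has_los: bool       # camera has line of sight to the child

def is_digging(s: Signals) -> bool:
    # Under occlusion, fall back on the RFID signal alone.
    bent = s.torso_angle_deg > 45 if s.camera_has_los else True
    return bent and s.shovel_rfid_motion > 0.3

print(is_digging(Signals(60.0, 0.7, True)))   # True: bent over, shovel moving
print(is_digging(Signals(10.0, 0.7, True)))   # False: standing upright
```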

The point of developing these capabilities is not simply to create smart interactive experiences for young children. Many of these same technologies should apply across a variety of other environments. Smart rooms that understand their own composition, and the activities, intentions, and states of their occupants, will provide all kinds of benefits in the future, from performance support in manufacturing, to more efficient workplace collaboration, to care for the elderly.