Advancing hardware-software codesign for AI-enabled systems

Sanghyun Hong and students

Assistant Professor Sanghyun Hong and his students Zachary Coalson and Anirudh Kanneganti discuss potential security and privacy risks of pushing machine learning models to the edge.

As the artificial intelligence revolution unfolds, we are witnessing transformations across a wide range of industries, powered by the ABCs of AI — algorithms, big data, and computing hardware. If there’s a D, it clearly stands for “disruption,” as the technology is poised to alter how we live and work in fundamental and dramatic ways.

But training and deploying AI models on large data sets takes substantial computational resources, which drives up the operational costs of AI systems. Researchers at Oregon State University are employing hardware-software codesign, which optimizes hardware and software simultaneously, enhancing performance while mitigating operational costs.

The AI degree program at Oregon State is hosted in the College of Engineering and is the first in the country to offer both master’s and doctoral degrees in AI as an interdisciplinary field. With a dedicated team of faculty researchers, the program encompasses AI model performance, energy efficiency, and security aspects, in both hardware and software contexts.

Lizhong Chen, associate professor of computer and electrical engineering and director of the System Technology and Application Research Laboratory, leads a research group that designs hardware accelerators for machine learning applications. The team's recent accomplishments include:

  • A novel polymorphic accelerator that supports multiple neural network types with a single architecture
  • A deep learning accelerator that investigates cross-layer data reuse for efficient computing
  • A reliable data memory that withstands soft errors for safety-critical machine learning applications
  • An intelligent wearable device incorporating ultralow-power machine learning inference to detect heart and brain diseases based on biological signals

Graphic explaining the use of AI to enhance and design computer systems.

STAR Lab: Problem Statement

  • Develop high-performance, energy-efficient, reliable, and secure computer systems, and explore their use in machine learning and natural language processing applications
  • Design better computing systems (e.g., GPUs, accelerators, HPCs, IoT devices) to accelerate AI workloads
  • Utilize AI approaches to aid the design and optimization of computing systems
  • Explore novel applications in machine learning and natural language processing enabled by efficient computing systems

Chen is also at the forefront of using AI to enhance and design computer systems. Chen’s team has made strides in harnessing AI’s power for machine learning accelerator design space exploration, identifying data communication patterns in GPUs, allocating resources dynamically in edge servers, and reducing peak power in data centers hosting numerous AI workloads. His book, "AI for Computer Architecture: Principles, Practice, and Prospects," encapsulates some of his design philosophy. This research direction could result in a transformative “self-evolving” system, in which AI accelerates computers, which then in turn accelerate AI.
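As a simplified illustration of design space exploration (a hedged sketch, not the STAR Lab's actual method or parameters), searching an accelerator's configuration space can be framed as sampling candidate designs and scoring them with a cost model. All parameter names and the cost model below are hypothetical:

```python
import random

# Hypothetical accelerator design space: each knob and its candidate values.
# These knobs are illustrative, not taken from the STAR Lab's work.
DESIGN_SPACE = {
    "pe_rows": [8, 16, 32],        # processing-element array rows
    "pe_cols": [8, 16, 32],        # processing-element array columns
    "sram_kb": [64, 128, 256],     # on-chip buffer size
    "dataflow": ["weight_stationary", "output_stationary"],
}

def estimate_cost(cfg, macs=1_000_000):
    """Toy cost model: estimated cycles to run `macs` multiply-accumulates,
    penalizing small buffers (which force more off-chip traffic)."""
    throughput = cfg["pe_rows"] * cfg["pe_cols"]
    reuse = 1.5 if cfg["dataflow"] == "weight_stationary" else 1.0
    buffer_penalty = 256 / cfg["sram_kb"]
    return macs / (throughput * reuse) * buffer_penalty

def random_search(trials=100, seed=0):
    """Sample random configurations and keep the cheapest one found."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in DESIGN_SPACE.items()}
        cost = estimate_cost(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best_cfg, best_cost = random_search()
print(best_cfg, round(best_cost, 1))
```

In practice, learned models replace both the sampler (e.g., reinforcement learning or Bayesian optimization instead of random choice) and the cost model (a trained predictor instead of a hand-written formula), which is where the AI-for-architecture direction comes in.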

Collaborating with Chen is Sanghyun Hong, assistant professor of computer science and Oregon State’s Secure-AI Systems Laboratory director. Hong's goal is to create efficient, trustworthy, and socially responsible AI-enabled systems. His recent work examines the design of efficient algorithms for executing computationally intensive deep neural networks on various hardware systems and assessing the security implications of implementing such efficient methods in real-world scenarios, including on edge devices — a project for which he was recently named a 2023 Google Research Scholar. Specifically, his research on slowdown attacks on input-adaptive neural network architectures earned a spotlight presentation at the International Conference on Learning Representations 2021, placing his paper among the top 3% of submissions.
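Input-adaptive architectures save compute by exiting early on "easy" inputs; a slowdown attack crafts inputs that never trigger an early exit, erasing those savings. The toy multi-exit model below illustrates the mechanism only; it is not the architecture or attack from Hong's paper, and the logits are hard-coded stand-ins for real layer computation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

class MultiExitModel:
    """Toy input-adaptive network: each block refines the prediction, and
    an internal classifier exits early once it is confident enough."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold

    def forward(self, logits_per_block):
        """`logits_per_block` holds the logits each block would produce
        for this input (a stand-in for actual layer computation)."""
        blocks_run = 0
        for logits in logits_per_block:
            blocks_run += 1
            probs = softmax(logits)
            if max(probs) >= self.threshold:  # confident -> exit early
                break
        return probs, blocks_run

model = MultiExitModel(threshold=0.9)

# An "easy" input: the first exit is already confident, so one block runs.
easy = [[5.0, 0.0], [6.0, 0.0], [7.0, 0.0]]
_, easy_blocks = model.forward(easy)

# A "slowdown" input: perturbed so no exit is ever confident,
# forcing the model through every block.
hard = [[0.1, 0.0], [0.2, 0.1], [0.1, 0.2]]
_, hard_blocks = model.forward(hard)

print(easy_blocks, hard_blocks)  # the easy input exits early; the hard one does not
```

The attack surface is exactly this gap: if an adversary can reliably push every input down the `hard` path, the efficiency the architecture was designed for disappears, which is why Hong frames it as an availability risk.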

Hong is also a trailblazer in investigating practical hardware-level attacks on deep neural networks. His research has demonstrated various attacks on the integrity, confidentiality, and availability of deep learning models. In recognition of his research contributions, he was invited as a speaker at USENIX Enigma 2021 to discuss "A Sound Mind in a Vulnerable Body: Practical Hardware Attacks on Deep Learning." Building on these findings, Hong is now exploring an exciting direction in achieving "Triple Wins":

  1. Security — Implementing secure computation mechanisms for deep learning models.
  2. Efficiency — Developing hardware-software co-design solutions for optimizing computational methods.
  3. Robustness — Ensuring computational savings of these methods remain resilient in adversarial settings.

As Chen and Hong advance their research, they will continue to innovate in hardware performance, energy efficiency, and security. The close interplay between hardware and software will matter most for emerging, resource-intensive deployments, from small-scale edge AI devices, such as smart appliances, to large-scale applications, such as conversational AI models serving billions of users. These efforts present opportunities for industry collaboration and workforce development, positioning Oregon State at the cutting edge of AI research.

If you're interested in connecting with the AI and Robotics Program for hiring and collaborative projects, please contact AI @ Oregon State, or reach out to Lizhong Chen and Sanghyun Hong directly.

Subscribe to AI @ Oregon State

Return to AI @ Oregon State