The emergence of artificial intelligence-based applications is increasing traffic in data centers. Data rates higher than 200 gigabits per second are needed to handle the ever-increasing traffic of data-heavy applications like ChatGPT. Historically, the industry has pushed data rates higher by increasing the modulation order and the baud rate. However, data rates have recently hit the “bandwidth wall,” because these two knobs for increasing data rates are approaching their limits. Today, increasing data rates by even a few Gb/s comes at a steep price: reduced energy efficiency and degraded data transmission accuracy (measured as bit error rate).
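To see why those are the two natural knobs, note that the data rate is simply the baud rate multiplied by the number of bits carried per symbol. A minimal sketch (the baud rates below are illustrative, not tied to any particular product):

```python
import math

# Data rate = baud rate * log2(modulation order).
# Example baud rates are illustrative only.
for baud_gbd in (28, 56, 112):
    for M in (2, 4, 8):                    # NRZ, PAM-4, PAM-8
        rate = baud_gbd * math.log2(M)     # Gb/s
        print(f"{baud_gbd:>4} GBd, PAM-{M}: {rate:5.0f} Gb/s")
# Pushing either factor further costs energy efficiency and bit error
# rate, which is the "bandwidth wall" described above.
```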
At Oregon State University, my team of graduate student researchers and I have proposed a two-legged approach to solving these wireline communication system challenges. For the first leg, my former graduate student Mohamed Megahed, Ph.D. electrical and computer engineering ’22, and I found that when data is modulated in the time domain and sent over a lossy wireline channel, it is converted into amplitude-domain modulation with an altered time shift related to the original signal's amplitude level. After realizing this behavior could be leveraged to improve signal-to-noise ratio while increasing data transmission rates without sacrificing accuracy, we named the discovery SNRE Modulation (Signal-to-Noise Ratio Enhanced Modulation).
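The underlying mechanism can be sketched with a first-order RC low-pass standing in for the lossy channel: when the transmitter encodes information in the timing of an edge, the channel output sampled at a fixed instant takes on a different amplitude for each timing. The time constant, sampling instant, and edge positions below are illustrative assumptions, not the actual SNRE scheme.

```python
import numpy as np

# A first-order RC low-pass as a stand-in for a lossy wireline channel.
# The transmitter encodes the symbol in the *timing* of a rising edge;
# the receiver samples the channel output at one fixed instant.
tau = 40e-12           # channel time constant (s), assumed
t_sample = 100e-12     # fixed receiver sampling instant (s), assumed

# Four candidate edge timings (time-domain symbol positions), assumed
edge_times = np.array([0, 20e-12, 40e-12, 60e-12])

# RC step response sampled at t_sample:
# v(t_sample) = 1 - exp(-(t_sample - t_edge) / tau)
rx_amplitude = 1 - np.exp(-(t_sample - edge_times) / tau)

for t_e, v in zip(edge_times, rx_amplitude):
    print(f"edge at {t_e*1e12:5.1f} ps -> sampled amplitude {v:.3f}")
# A later edge leaves less time for the output to charge, so each edge
# timing arrives as a distinct amplitude: time-domain modulation is
# converted into amplitude-domain modulation by the lossy channel.
```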
To validate our findings, Megahed quickly designed an SNRE-8 wireline transceiver in 65nm CMOS and compared its performance against conventional eight-level pulse amplitude modulation (PAM-8). Operating at up to 27Gb/s, the SNRE-8 achieved a staggering 10dB improvement in signal-to-noise ratio over PAM-8.
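For context on why 10dB is significant: for a fixed peak-to-peak swing, PAM-M shrinks the spacing between adjacent levels by a factor of M - 1, a well-known penalty of 20 log10(M - 1) dB relative to two-level signaling. A quick back-of-envelope sketch:

```python
import math

# Level-spacing penalty of PAM-M vs. two-level NRZ at a fixed swing:
# spacing shrinks by (M - 1), i.e. 20*log10(M - 1) dB of SNR.
for M in (2, 4, 8):
    penalty_db = 20 * math.log10(M - 1)
    print(f"PAM-{M}: {penalty_db:4.1f} dB level-spacing penalty vs. NRZ")
# PAM-8 gives up roughly 16.9 dB, so recovering 10 dB of SNR at the
# same data rate wins back a substantial fraction of that penalty.
```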
For the second leg, I hypothesized that if we encode the data at the transmitter so that it carries attributes such as no consecutive identical digits, then the receiver can extract simple features from the received signal and correctly classify the received data bits. To verify this hypothesis, my former graduate student Yusang Chun, Ph.D. electrical and computer engineering ’20, used dicode encoding at the transmitter and a simple decision tree-based classifier at the receiver to demonstrate the first equalizer- and ADC-free wireline communication link operating at up to 16Gb/s (in 65nm CMOS). This idea, which employs principles of machine learning, was so promising that Frank O'Mahony, Intel Fellow and director of I/O design enablement, secured grant funding for further research into the concept. That funding allowed a team of current and former students from my lab to use hybrid ternary encoding for successful communication over wireline channels. As a result, my research group has been invited to join the Defense Advanced Research Projects Agency’s JUMP 2.0 program to investigate new modulation, encoding, and classifier architectures for efficient communication over wireline channels.
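A toy sketch can make the idea concrete: a ternary stream constrained so that no two consecutive symbols are identical, a crude one-tap-ISI channel, and a small decision tree that classifies each symbol from the current and previous received samples. The encoding, channel model, and features below are illustrative assumptions only, not the dicode or hybrid ternary schemes used in the actual chips.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy constrained code: ternary symbols {-1, 0, +1} chosen so that no
# two consecutive symbols are identical (only the *property* from the
# hypothesis, not the actual dicode/hybrid ternary encoding).
def encode(n):
    syms = [rng.choice([-1, 0, 1])]
    for _ in range(n - 1):
        choices = [s for s in (-1, 0, 1) if s != syms[-1]]
        syms.append(rng.choice(choices))
    return np.array(syms)

tx = encode(20000)

# Crude lossy-channel model (assumed): one tap of intersymbol
# interference from the previous symbol plus Gaussian noise.
rx = tx + 0.45 * np.concatenate(([0], tx[:-1])) \
        + 0.08 * rng.standard_normal(tx.size)

# Simple features: current and previous received samples. Because the
# code forbids repeated symbols, these two features separate the classes.
X = np.column_stack([rx[1:], rx[:-1]])
y = tx[1:]

# Shallow decision tree as the classifier-based "receiver".
clf = DecisionTreeClassifier(max_depth=4).fit(X[:15000], y[:15000])
print("symbol accuracy on held-out data:", clf.score(X[15000:], y[15000:]))
```

The point of the sketch is the architectural one: with a constrained encoding, a few raw samples and a shallow classifier can stand in for the power-hungry equalizer and ADC of a conventional receiver.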
To explore collaborative opportunities, or the semiconductor program in general, get in touch with us at semi-osu@oregonstate.edu.