C-Transformer chip: this ultra-low-power semiconductor might bring local AI to mobile phones

The chip could bring local, low-power AI to phones and tablets.

The Korea Advanced Institute of Science and Technology (KAIST) has announced the development of a Complementary-Transformer AI chip (or C-Transformer chip) that can process large language models (LLMs) at ultra-low power and could be integrated into devices such as mobile phones.

According to a press release carried by The Korea Bizwire, the C-Transformer chip can run GPT-2 while consuming only 400 milliwatts, processing the model in roughly 0.4 seconds.

KAIST says the chip is a 4.5 mm square die fabricated with Samsung technology, and that it consumes about 625 times less power than Nvidia's A100 GPU, which needs approximately 250 watts to process GPT-2.

The C-Transformer chip is also about 41 times smaller than the A100, which makes it a natural fit for mobile phones.
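Those two ratios are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python: the C-Transformer figures come from the press release, while the A100 die area (roughly 826 mm² for the GA100 die) is an assumption based on Nvidia's published specs, not anything in KAIST's announcement.

```python
# Back-of-the-envelope check of the quoted 625x and 41x figures.
# A100 values below are assumptions from Nvidia's public specs,
# not from the KAIST announcement.

C_TRANSFORMER_POWER_W = 0.4       # 400 milliwatts, per the press release
A100_POWER_W = 250.0              # approximate A100 draw cited above

C_TRANSFORMER_DIE_MM2 = 4.5 ** 2  # 4.5 mm x 4.5 mm square die = 20.25 mm^2
A100_DIE_MM2 = 826.0              # assumed GA100 die area (Nvidia specs)

power_ratio = A100_POWER_W / C_TRANSFORMER_POWER_W  # 250 / 0.4 = 625
area_ratio = A100_DIE_MM2 / C_TRANSFORMER_DIE_MM2   # 826 / 20.25 = ~41

print(f"Power: ~{power_ratio:.0f}x less")    # Power: ~625x less
print(f"Area:  ~{area_ratio:.0f}x smaller")  # Area:  ~41x smaller
```

Both results line up with the quoted numbers, assuming the "41 times smaller" claim refers to die area.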

In an interview with The Korea Times, KAIST said the chip is also capable of neuromorphic computing, a form of computing that mimics the structure and functions of the human brain, and claimed to be the first in the world to integrate such technology into hardware:

Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator.

KAIST

Neuromorphic computing might be the key to letting local AI run properly on small devices, such as tablets or mobile phones, without an Internet connection. That would be preferable to cloud-based AI services, since it would consume far less power and offer a more sustainable way to implement AI on current technology.

It’s worth mentioning that KAIST isn’t the only one thinking this way: Dell, another tech giant, recently stated that the future of AI is on-premises, and that users will most likely want their AI products and services to run locally rather than in the cloud.

This is also convenient: since the new C-Transformer chip could, at least in theory, be integrated into mobile phones, we might actually end up with phones that run LLMs locally.
