Phison expands aiDAPTIV+ technology to enable AI processing on integrated GPUs, reducing DRAM needs and accelerating inference for wider AI adoption.
By enabling large AI models to run on standard consumer hardware, Phison’s aiDAPTIV+ technology removes a primary barrier to widespread AI adoption: the need for expensive, specialized GPUs. This could democratize AI development, allowing smaller companies and individual developers to build and fine-tune sophisticated models, accelerating innovation across the software ecosystem.
aiDAPTIV+ extends large-model AI capabilities to PCs with integrated GPUs
The technology uses NAND flash to expand system memory for AI tasks
DRAM requirements for a 120B parameter model are cut from 96GB to 32GB
Inference performance is accelerated by up to 10x on consumer-grade hardware
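The key points above describe a tiered-memory approach: model weights live on NAND flash and are paged into DRAM only while a layer is being computed, so peak DRAM use tracks one layer rather than the whole model. Phison's actual implementation is proprietary; the sketch below only illustrates the general idea using a memory-mapped file, and all names in it (`build_offloaded_weights`, `run_inference`) are hypothetical.

```python
import os
import tempfile

import numpy as np

def build_offloaded_weights(path, n_layers=4, dim=64):
    """Write per-layer weight matrices to flash-backed storage (a .npy file)."""
    w = np.lib.format.open_memmap(
        path, mode="w+", dtype=np.float32, shape=(n_layers, dim, dim)
    )
    rng = np.random.default_rng(0)
    for i in range(n_layers):
        w[i] = rng.standard_normal((dim, dim)).astype(np.float32)
    w.flush()

def run_inference(path, x):
    """Stream one layer at a time from storage into DRAM."""
    w = np.load(path, mmap_mode="r")   # weights stay on disk, not in DRAM
    for i in range(w.shape[0]):
        layer = np.asarray(w[i])       # page only this layer into DRAM
        x = np.tanh(layer @ x)         # toy per-layer computation
    return x

path = os.path.join(tempfile.mkdtemp(), "weights.npy")
build_offloaded_weights(path, n_layers=4, dim=64)
out = run_inference(path, np.ones(64, dtype=np.float32))
print(out.shape)  # (64,)
```

With 4 layers of 64x64 float32 weights this saving is trivial, but the same access pattern is what makes a 120B-parameter model feasible in 32GB of DRAM: only the working set resides in memory, while flash holds the rest.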