At CES 2026, Phison unveiled an expansion of its aiDAPTIV+ technology, which uses high-capacity NAND flash to extend GPU memory and enables larger AI models to run on everyday PCs.
The demonstrations spanned laptops, desktops, and compact mini‑PCs, with Phison arguing the approach reduces the DRAM required to support large language models.
By moving portions of a model’s working memory from DRAM to flash, aiDAPTIV+ aims to lower hardware costs and let devices with integrated GPUs handle models that would typically demand far more VRAM.
Phison said a 120‑billion‑parameter setup could run on just 32 GB of DRAM, compared with the roughly 96 GB traditionally required, highlighting potential improvements in user experience on affordable hardware.
In early results, storing tokens in flash and avoiding re‑computation during inference reportedly sped up response times by up to tenfold and reduced energy use in notebooks; independent benchmarks remain necessary to confirm these gains across real‑world workloads.
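The token-caching idea described above amounts to a tiered store: results computed once are kept in a small fast tier, spilled to flash-backed storage when the fast tier fills, and reloaded on later requests instead of being recomputed. The sketch below is a minimal illustration of that pattern only; all names are hypothetical, and it is not Phison's implementation, which has not been published.

```python
import os
import pickle
import tempfile

class TieredTokenCache:
    """Illustrative two-tier cache: a small in-memory dict (the "DRAM"
    tier) backed by files on disk (the "flash" tier). On a miss in both
    tiers the value is computed once, cached, and reused thereafter."""

    def __init__(self, mem_slots, flash_dir=None):
        self.mem_slots = mem_slots          # capacity of the fast tier
        self.mem = {}                       # DRAM tier (insertion-ordered)
        self.flash_dir = flash_dir or tempfile.mkdtemp(prefix="flash_")

    def _flash_path(self, key):
        return os.path.join(self.flash_dir, f"{key}.pkl")

    def get(self, key, compute):
        if key in self.mem:                 # fast-tier hit
            return self.mem[key]
        path = self._flash_path(key)
        if os.path.exists(path):            # flash-tier hit: no recompute
            with open(path, "rb") as f:
                value = pickle.load(f)
        else:                               # miss: compute exactly once
            value = compute(key)
        self._admit(key, value)
        return value

    def _admit(self, key, value):
        if len(self.mem) >= self.mem_slots: # evict oldest entry to flash
            old_key, old_val = next(iter(self.mem.items()))
            del self.mem[old_key]
            with open(self._flash_path(old_key), "wb") as f:
                pickle.dump(old_val, f)
        self.mem[key] = value

calls = []
def expensive(key):                         # stand-in for token computation
    calls.append(key)
    return key * 2

cache = TieredTokenCache(mem_slots=2)
for k in [1, 2, 3, 1, 2, 3]:
    cache.get(k, expensive)
print(calls)                                # each key computed only once: [1, 2, 3]
```

Even though the six lookups revisit every key after it has been evicted from the two-slot fast tier, each value is computed only once; repeat requests are served from the flash tier, which is the effect the reported speedups attribute to avoiding re-computation.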
Acer participated in the CES demonstrations, running a 120B-class model variant on an Acer notebook configured with 32 GB of memory. Other collaborators include Corsair, MSI, ASUS, and Emdoor, showcasing various form factors for on-device AI tasks such as agentic interfaces and note-taking summaries.
Phison framed aiDAPTIV+ as a path to democratize AI work that typically requires expensive workstations or cloud servers. The company cautioned that many products remain under development and timelines for availability could change; independent benchmarking and field deployments will be needed to validate performance and latency parity with production AI workloads.