How Will Deep Learning Change SoCs?
Junko Yoshida, EETimes
3/30/2015 00:00 AM EDT
MADISON, Wis. – Deep Learning is already changing the way computers see, hear and identify objects in the real world.
However, the bigger -- and perhaps more pertinent -- questions for the semiconductor industry are: Will "deep learning" ever migrate into smartphones, wearable devices, or the tiny computer-vision SoCs used in highly automated cars? Has anybody come up with an SoC architecture optimized for neural networks? If so, what does it look like?