Intel unveils its first chips built for AI in the cloud

05:15, 13 November 2019 (Source: engadget.com)

Alibaba unveils its own AI chip for cloud computing

Apparently, Huawei isn't the only Chinese mega-corporation developing its own AI chip. Alibaba has unveiled an in-house-designed AI chip called the Hanguang 800, a month after Huawei launched the Ascend 910. The company, mostly known for its e-commerce business, said the chip could significantly cut down on the time needed to finish machine learning tasks. For example, Alibaba-owned shopping website Taobao takes an hour to categorize the one billion product images sellers upload to the platform; with the new chip, that task would apparently be done in five minutes.



Intel is no stranger to AI-oriented chips, but now it's turning its attention to chips that might be thousands of miles away. The tech firm has introduced two new Nervana Neural Network Processors, the NNP-T1000 and NNP-I1000, which are Intel's first ASICs designed explicitly for AI in the cloud. The NNP-T chip is meant for training AIs in a 'balanced' design that can scale from small computer clusters through to supercomputers, while the NNP-I model handles "intense" inference tasks.


The chipmaker also unveiled a next-gen Movidius Vision Processing Unit whose updated computer vision architecture promises over 10 times the inference performance while reportedly managing efficiency six times better than rivals. Those claims have yet to pan out in the real world, but it's safe to presume that anyone relying on Intel tech for visual AI work will want to give this a look.

AMD unveils its next-gen Threadripper CPUs with up to 32 cores

AMD has unveiled its 3rd-generation Ryzen Threadripper CPUs built with its 7-nanometer "Zen 2" architecture, and the performance looks impressive. As before, there are 24- and 32-core variants, the TR 3960X and TR 3970X, with base clocks of 3.8GHz and 3.7GHz respectively and a max boost speed of up to 4.5GHz. Both chips will run on AMD's all-new TRX40 chipset with 72 available PCIe 4.0 lanes and 12 USB-C 3.1 Gen 2 10Gbps SuperSpeed ports. These chips and motherboards are aimed at content creators, because the extra cores and higher PCIe 4.0 bandwidth won't help gamers much.


Intel's Nervana NNP-I is based on a 10-nanometer Ice Lake processor and is purpose-built to accelerate deep learning deployment at scale. Naveen Rao, general manager of Intel's artificial intelligence products group, told the news wire the chips are aimed at reaching "a future situation of 'AI everywhere.'"

You'll have to be patient for the Movidius chip, as it won't ship until sometime in the first half of 2020. This could nonetheless represent a big leap for AI performance, at least among companies that aren't relying on rivals like NVIDIA. Intel warned that bleeding-edge uses of AI could require performance to double every 3.5 months -- that's not going to happen if companies simply rely on conventional CPUs. And when internet giants like Facebook and Baidu lean heavily on Intel for AI, you might see practical benefits like faster site loads or more advanced AI features.
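For a rough sense of what that pace implies (a back-of-the-envelope sketch, not a figure from Intel or the article), a doubling every 3.5 months compounds to roughly a tenfold increase in demanded performance per year:

    # Back-of-the-envelope sketch: yearly growth implied by a doubling every 3.5 months.
    # The 3.5-month figure comes from the article; the rest is simple arithmetic.
    months_per_doubling = 3.5
    doublings_per_year = 12 / months_per_doubling   # about 3.4 doublings in a year
    yearly_factor = 2 ** doublings_per_year         # about 10.8x per year
    print(f"Implied growth: {yearly_factor:.1f}x per year")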

[Image: Intel NNP-T chip for AI training in the cloud. Credit: Intel, provided by Oath Inc.]

Intel used a 17-inch foldable tablet to show off its 'Tiger Lake' platform.
Tiger Lake, Intel's next-generation Core mobile platform, is coming later this year. And to prove it, the company trotted out a 17-inch foldable OLED tablet, codenamed Horseshoe Bend, as an example of what its future chips can achieve. So far, Intel is saying Tiger Lake will feature "double-digit performance gains," huge improvements for AI processing and Thunderbolt 4, which will be four times as fast as USB 3.0. Those specs are nice and all, but mainly we wanted to see more of that enormous Horseshoe Bend tablet concept. It's by far the largest foldable OLED we've seen -- when laid flat, it looks as big as a full-sized PC monitor.
