A bright idea in neural networks

Subject: General Tech | August 2, 2018 - 12:32 PM |
Tagged: neural net, 3d printing

Researchers at UCLA have printed their own type of neural network, one which uses light instead of electrons to process inputs.  The D2NN is trained for feature recognition and analysis as well as classification, and performs those tasks at a much faster rate than a conventional neural net.  Another benefit is that once you have printed plates with the correct patterns to successfully classify objects, the only power required is the light source whose input is being processed; no internal electricity is needed.  This passive design implies that retraining the network may require printing new sheets, though the link at Slashdot was not completely clear about that detail ... much like the sheets themselves.
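As a rough numerical sketch of how such a diffractive network classifies: light propagates between printed plates, each plate shifts the phase of the wavefront, and a detector region with the most light at the end picks the class. This is only an illustration of the idea, not the UCLA team's actual model; the propagation kernel, plate count, grid size, and random phase masks below are placeholder assumptions standing in for trained, printed plates.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # grid size of each simulated diffractive layer

def propagate(field):
    """Toy free-space propagation between plates via FFT
    (a crude stand-in for the angular-spectrum method)."""
    f = np.fft.fft2(field)
    fx = np.fft.fftfreq(N)
    # quadratic phase in frequency space loosely mimics Fresnel diffraction
    H = np.exp(-1j * np.pi * (fx[:, None]**2 + fx[None, :]**2) * 50)
    return np.fft.ifft2(f * H)

# each printed plate is a fixed phase mask; random stand-ins for trained ones
plates = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

def d2nn_forward(input_image):
    field = input_image.astype(complex)
    for plate in plates:
        field = propagate(field) * plate   # diffract, then pass through plate
    intensity = np.abs(propagate(field))**2  # a detector only sees intensity
    # classification: which of 10 detector regions caught the most light
    regions = np.array_split(intensity.reshape(-1), 10)
    return int(np.argmax([r.sum() for r in regions]))

label = d2nn_forward(rng.random((N, N)))
```

The key property the article describes falls out of the structure: once the plates are fixed, the "computation" is just light passing through them.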

diffractive-deep-neural-network-uses-light-5.jpg

"Matt Kennedy from New Atlas reports of an all-optical Diffractive Deep Neural Network (D2NN) architecture that uses light diffracted through numerous plates instead of electrons. It was developed by Dr. Aydogan Ozcan and his team of researchers at the Chancellor's Professor of electrical and computer engineering at UCLA."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

Eight is enough, looking at how the new Tesla HPC cards from NVIDIA will work

Subject: General Tech | September 14, 2016 - 01:06 PM |
Tagged: pascal, tesla, p40, p4, nvidia, neural net, m40, M4, HPC

The Register have packaged a nice explanation of the basics of how neural nets work in their quick look at NVIDIA's new Pascal-based HPC cards, the P4 and P40.  The tired joke about Zilog or Dick Van Patten stems from research which has shown that 8-bit precision is most effective when feeding data into a neural net.  Using 16 or 32-bit values slows the processing down significantly while adding little precision to the results produced.  NVIDIA is also perfecting a hybrid mode, where you can opt for a less precise answer produced by your local, presumably limited, hardware, or you can upload the data to the cloud for the full treatment.  This is great for those with security concerns, or when a quicker answer is more valuable than a more accurate one.
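To illustrate why dropping to 8 bits loses so little: a minimal sketch of symmetric linear int8 weight quantization, which rounds each weight to one of 255 evenly spaced levels. The matrix sizes and the specific quantization scheme here are illustrative assumptions, not NVIDIA's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0, 0.5, size=(4, 8))  # a toy layer's weight matrix
x = rng.normal(0, 1, size=8)               # one input vector

# symmetric linear quantization to signed 8-bit integers
scale = np.abs(weights).max() / 127.0
w_int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

full = weights @ x                              # full-precision result
quant = (w_int8.astype(np.float64) * scale) @ x  # dequantized int8 result

# each weight is off by at most scale/2, so the output error is tiny
max_err = np.abs(full - quant).max()
```

In a real deployment the matrix multiply itself runs on integer hardware, which is where the speed-up the article mentions comes from; the dequantization here is just to compare against the float result.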

As for the hardware, NVIDIA claims the optimizations on the P40 will make it "40 times more efficient" than an Intel Xeon E5 CPU, and it will also provide slightly more throughput than the currently available Titan X.  You can expect to see these arrive on the market sometime over the next two months.

newtesla.PNG

"Nvidia has designed a couple of new Tesla processors for AI applications – the P4 and the P40 – and is talking up their 8-bit math performance. The 16nm FinFET GPUs use Nv's Pascal architecture and follow on from the P100 launched in June. The P4 fits on a half-height, half-length PCIe card for scale-out servers, while the beefier P40 has its eyes set on scale-up boxes."


Source: The Register

NVIDIA's new Tesla M40 series

Subject: General Tech | November 11, 2015 - 06:12 PM |
Tagged: nvidia, Tesla M40, neural net, JetsonTX1

There are a lot of colloquialisms tossed about, such as AI research and machine learning, which refer to the work being done designing neural nets: feeding huge amounts of data into an architecture capable of forming and weighting connections, in an attempt to create a system that can process that input in a meaningful way.  You might be familiar with some of the more famous experiments, such as Google's Deep Dream and Wolfram's Language Image Identification Project.  As you might expect this takes a huge amount of computational power, and NVIDIA has just announced the Tesla M40 accelerator card for training deep neural nets.  It is fairly low powered at 50-75W of draw and NVIDIA claims it will be able to deal with five times more simultaneous video streams than previous products.  Along with this comes the Hyperscale Suite software, specifically designed to work on the new hardware, which Jen-Hsun Huang comments on over at The Inquirer.
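That "forming and weighting connections" process can be sketched with a toy network trained by gradient descent. This is a generic NumPy illustration of the principle, not any of the systems mentioned above; the XOR problem stands in for the huge datasets real training requires.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy dataset: XOR, which a single layer cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# the "connections": two layers of randomly initialized weights
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=8);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return sigmoid(h @ W2 + b2), h    # prediction, hidden state

mse_before = np.mean((forward(X)[0] - y) ** 2)

lr = 0.3
for _ in range(5000):
    p, h = forward(X)
    # backpropagate the squared error to reweight the connections
    d_p = (p - y) * p * (1 - p)
    W2 -= lr * h.T @ d_p; b2 -= lr * d_p.sum()
    d_h = np.outer(d_p, W2) * (1 - h ** 2)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(0)

mse_after = np.mean((forward(X)[0] - y) ** 2)
```

Scaling this loop from four samples to millions of images is exactly the workload cards like the M40 are built to accelerate.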

At the end of the presentation he also mentioned the tiny Jetson TX1 SoC.  It has a 256-core Maxwell GPU capable of 1 TFLOPS, a 64-bit ARM A57 CPU, and 4GB of memory, and communicates via Ethernet or Wi-Fi, all on a card 50x87mm (2x3.4") in size.  It will be available at $300 when released sometime early next year.

hyperscale-datacentre-nvidia-540x334.jpeg

"Machine learning is the grand computational challenge of our generation. We created the Tesla hyperscale accelerator line to give machine learning a 10X boost. The time and cost savings to data centres will be significant."


Source: The Inquirer