Learning at the Speed of Light

The past decade has been a transformative time in the world of machine learning. A field that was once heavier on hype than on practical applications grew and started delivering major breakthroughs that revolutionized industrial processes and consumer products. But for the field to continue to deliver big wins in these areas and beyond, further progress will be needed in the area of TinyML. Traditional deployments of machine learning on tiny computing devices rely on powerful computational resources in the cloud to run inferences, which limits their applicability due to issues with privacy, latency, and cost. TinyML offers the promise of eliminating these problems and opening up new classes of problems to be solved by artificially intelligent algorithms.

Of course, running a modern machine learning model with billions of parameters isn't exactly easy when memory is measured in kilobytes. But with some creative thinking and a hybrid approach that blends the power of the cloud with the advantages of TinyML, it just might be possible. A team of researchers at MIT has shown how with their method, called Netcast, which relies on heavily resourced cloud computers to quickly retrieve model weights from memory and transmit them almost instantly to the tiny hardware via a fiber optic network. Once the weights arrive, an optical device called a broadband Mach-Zehnder modulator combines them with sensor data to perform lightning-fast calculations locally.
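Numerically, what the modulator computes is easy to picture: each streamed weight is multiplied by a locally encoded input value, and a detector accumulates the products, which is just a dot product (one neuron's pre-activation). The toy sketch below simulates that idea digitally; it is an illustration of the math, not the MIT team's implementation, and all variable names are hypothetical.

```python
import numpy as np

# Toy illustration of the Netcast-style idea: weights stream in one at a
# time (as optical amplitudes), get multiplied by locally encoded sensor
# data in the modulator, and a detector sums the products.
rng = np.random.default_rng(0)
weights_stream = rng.normal(size=1000)   # weights arriving from the cloud
local_inputs = rng.normal(size=1000)     # sensor data encoded on the device

accumulator = 0.0
for w, x in zip(weights_stream, local_inputs):
    accumulator += w * x                 # analog multiply + photodetector sum

# The result is equivalent to an ordinary digital dot product.
assert np.isclose(accumulator, weights_stream @ local_inputs)
```

The key point of the analog scheme is that the multiply-accumulate happens in the optical domain as the weights fly past, so the tiny device never needs the memory or compute to do this loop itself.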


The team’s solution uses a cloud computer with a large amount of memory to hold the weights of a full neural network in RAM. The weights are streamed to the connected device as needed through an optical link with enough bandwidth to transfer an entire feature-length movie in a single millisecond. Limited memory is one of the biggest factors preventing tiny devices from running large models, but it is not the only one. Processing power is also at a premium on these devices, so the researchers also proposed a solution to this problem in the form of a shoebox-sized receiver that performs super-fast analog computations by encoding input data on the transmitted weights.
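The memory benefit of streaming weights on demand can be sketched in a few lines: the edge device requests one layer's weights at a time, uses them, and discards them, so its peak memory is a single layer rather than the whole network. The sketch below is a simplified digital analogy under assumed names (`cloud_layer_weights`, `run_streamed_inference` are hypothetical), not the actual Netcast protocol.

```python
import numpy as np

def cloud_layer_weights(layer_shapes, seed=0):
    """Stand-in for the cloud server: yields one weight matrix at a time."""
    rng = np.random.default_rng(seed)
    for shape in layer_shapes:
        yield rng.normal(scale=0.1, size=shape)

def run_streamed_inference(x, layer_shapes):
    # The device never holds the full model: each layer's weights arrive
    # over the link, are used for one matrix-vector product, then dropped.
    for W in cloud_layer_weights(layer_shapes):
        x = np.maximum(W @ x, 0.0)   # compute locally (ReLU MLP layer)
    return x

shapes = [(64, 16), (32, 64), (10, 32)]      # example 3-layer network
out = run_streamed_inference(np.ones(16), shapes)
print(out.shape)  # (10,)
```

In the real system the per-layer products would be computed optically in the receiver rather than by `W @ x` on a CPU, but the streaming structure is the same.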


This scheme makes it possible to perform trillions of multiplications per second on a device with no more resources than a desktop computer from the early 1990s. In the process, on-device machine learning that preserves privacy, minimizes latency, and is highly energy efficient becomes possible. Netcast has been tested on image classification and digit recognition tasks with over 50 miles separating the TinyML device and the cloud resources. After only a small amount of calibration work, average accuracy rates exceeding 98% were observed. Results of this quality are good enough for use in commercial products.


Before that happens, the team is working to improve their methods to achieve even better performance. They also want to shrink the shoebox-sized receiver down to the size of a single chip so that it can be incorporated into small devices such as smartphones. With further refinement of Netcast, big things could be on the horizon for TinyML.

