Learning at the Speed of Light

The past year has been a transformative one for machine learning. The field has grown well beyond industrial applications and now delivers advances that are reshaping technological processes and consumer products. But for it to keep producing significant gains in these areas and beyond, further progress is needed in tinyML. Traditional approaches to deploying machine learning on small devices rely on powerful computational resources in the cloud to run inference, which limits their use due to concerns about privacy, latency, and cost. TinyML promises to eliminate these problems and open new classes of problems to smart algorithms.


Of course, running a machine learning model with billions of parameters is not easy when memory is measured in kilobytes. But with some imagination and a hybrid approach that combines the power of the cloud with the advantages of tinyML, it is possible. A team of researchers at MIT has shown how this can be done with a system they call Netcast, which relies on a powerful cloud server to rapidly retrieve model weights from memory and stream them to the tinyML device over an optical fiber network. Once the weights arrive, a broadband optical device called a Mach-Zehnder modulator combines them with local sensor data to perform calculations at the speed of light.
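The idea of combining streamed weights with local data in a modulator can be illustrated with a toy numerical sketch. This is not the MIT hardware or its actual signal model; it is a hypothetical simulation in which each time slot carries one weight as an optical amplitude, the client's modulator scales that slot by its input value, and a photodetector integrates the slots into a single dot product. All variable names and noise figures are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of time-multiplexed analog multiply-accumulate:
# the server streams one weight per time slot as an optical amplitude,
# the client's Mach-Zehnder modulator scales each slot by its input value,
# and a photodetector integrates the slots into a single dot product.
weights = rng.uniform(0, 1, size=1000)   # streamed from the "cloud"
inputs = rng.uniform(0, 1, size=1000)    # local sensor data

received = weights * inputs                           # per-slot modulation
received += rng.normal(0, 1e-3, size=received.shape)  # detector noise
dot_analog = float(received.sum())                    # photodetector integration

dot_exact = float(weights @ inputs)
rel_err = abs(dot_analog - dot_exact) / dot_exact
print(f"exact={dot_exact:.3f}  analog={dot_analog:.3f}  rel_err={rel_err:.2e}")
```

Because the noise on each slot is independent, the integrated error grows only with the square root of the vector length, which is why an analog accumulation like this can remain accurate over thousands of terms.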

The team’s solution uses a cloud computer with abundant memory to hold the weights of a neural network entirely in RAM. Those weights are then streamed to the edge device on demand over an optical fiber link with enough bandwidth to transfer an entire movie in a millisecond. Limited memory is one of the main factors preventing tinyML devices from running large models, but it is not the only one. Processing power is also at a premium on these devices, so the researchers addressed that problem with a shoebox-sized receiver that performs ultra-fast analog computation by combining the incoming weights with the input data.
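The "movie in a millisecond" figure implies an enormous link bandwidth. A quick back-of-the-envelope check, using an assumed 5 GB movie file (the file size is an illustrative assumption, not a figure from the article), shows the scale involved:

```python
# Back-of-the-envelope check of the "movie in a millisecond" claim.
# Assumed figure: a 5 GB movie file; the 1 ms window is from the article.
movie_bytes = 5e9
transfer_s = 1e-3
bandwidth_bps = movie_bytes * 8 / transfer_s
print(f"required bandwidth: {bandwidth_bps / 1e12:.0f} Tb/s")
```

Tens of terabits per second is far beyond what a radio link or copper cable can carry, which is why the scheme depends on optical fiber.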


This system can perform trillions of multiply-and-accumulate operations per second on hardware no more capable than a desktop computer from the 1990s. In the process, it preserves privacy, reduces latency, and improves energy efficiency. Netcast has been tested on image classification tasks with more than 50 miles of fiber separating the tinyML device from its cloud source. After a small amount of calibration work, average accuracy was found to exceed 98%. Results like these are good enough to support commercial products.
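The article mentions that a small amount of calibration was needed before the accuracy figures were reached. One simple way such a calibration could work, sketched here as a hypothetical example rather than the team's actual procedure, is to send a handful of known test values through the analog link, fit a linear correction for the link's unknown gain and offset, and apply it to subsequent measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration sketch: the analog link applies an unknown
# gain and offset to every value it carries. Sending known test values
# lets the receiver fit a linear correction.
true_vals = rng.uniform(-1, 1, size=32)    # known calibration inputs
gain, offset = 0.83, 0.05                  # unknown link distortion
measured = gain * true_vals + offset + rng.normal(0, 0.01, size=32)

a, b = np.polyfit(measured, true_vals, 1)  # fit the inverse mapping
corrected = a * measured + b
max_err = float(np.max(np.abs(corrected - true_vals)))
print(f"max error after calibration: {max_err:.3f}")
```

A linear fit like this is cheap enough to run on a constrained device, which matters when the whole point is to keep the edge hardware simple.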


Before that happens, the team is working to refine their methods to achieve still better performance. They also want to shrink the shoebox-sized receiver down to a single chip that could be integrated into devices such as smartphones. With further refinement of Netcast, there may be big things in the air for tinyML.

