Perceptron: 'Earables' that can detect facial movements and super-efficient AI processors - TechCrunch

Research into machine learning and artificial intelligence, now a key technology in almost every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent findings and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

An “earable” that uses sonar to read facial expressions was among the projects that caught our attention over the past few weeks. So did ProcTHOR, a framework from the Allen Institute for AI (AI2) that procedurally generates environments that can be used to train real-world robots. Among other highlights, Meta created an AI system that can predict a protein’s structure from a single amino acid sequence. And researchers at MIT have developed new hardware that they claim offers faster computation for AI with less energy.

The earable, which was developed by a team at Cornell, looks like a pair of bulky headphones. The speakers send sound signals to the sides of the user’s face, while the microphone picks up the subtle echoes created by the nose, lips, eyes and other facial features. These “echo profiles” allow the headset to pick up movements like eyebrow raises and eye darts, which an AI algorithm translates into full facial expressions.

AI headset

Image Credits: Cornell
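To make the idea concrete, here is a minimal sketch of how an echo profile might be computed and classified. This is not Cornell’s actual pipeline: the cross-correlation step, the feature size and the off-the-shelf classifier are all assumptions made for illustration.

```python
# Illustrative sketch (not Cornell's published pipeline): build an "echo
# profile" by cross-correlating the emitted signal with the recorded echo,
# then classify the profile into a facial expression with a small model.
import numpy as np
from sklearn.neural_network import MLPClassifier

def echo_profile(transmitted: np.ndarray, received: np.ndarray) -> np.ndarray:
    """Cross-correlate the emitted signal with the recorded echo to estimate
    how strongly the face reflects sound at each time lag (i.e. distance)."""
    corr = np.correlate(received, transmitted, mode="full")
    corr = np.abs(corr) / (np.linalg.norm(transmitted) + 1e-9)
    return corr[len(transmitted) - 1:]  # keep non-negative lags only

# Hypothetical training data: one echo profile per frame, labeled with the
# expression the wearer was making (0 = neutral, 1 = brow raise, ...).
# Real data would come from echo_profile() applied to recorded frames.
X_train = np.random.rand(200, 512)
y_train = np.random.randint(0, 4, size=200)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```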

The earable has several limitations. It only lasts three hours on a battery, it has to offload processing to a smartphone, and the echo-conversion AI algorithm has to train on 32 minutes of facial data before it can start recognizing expressions. But the researchers say it’s a much more elegant experience than the recording rigs traditionally used in animation for film, television and video games. For example, for the mystery game L.A. Noire, Rockstar Games built a rig with 32 cameras aimed at each actor’s face.

Maybe someday Cornell’s earable will be used to create animations for humanoid robots. But those robots will first have to learn how to navigate a room. Fortunately, AI2’s ProcTHOR takes a step (no pun intended) in that direction, procedurally creating thousands of custom scenes, including classrooms, libraries and offices, in which simulated robots must perform tasks such as picking up objects and moving around furniture.

The idea behind the scenes, which have simulated lighting and contain a subset of a massive array of surface materials (e.g., wood, tile, etc.) and household objects, is to expose the simulated robots to as much variety as possible. It is well established in AI that performance in simulated environments can translate into better performance for real-world systems; autonomous car companies like Alphabet’s Waymo simulate entire neighborhoods to fine-tune how their cars behave in the real world.

ProcTHOR AI2

Image Credits: Allen Institute for Artificial Intelligence
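ProcTHOR’s actual generator and API are far more involved than this, but a toy sketch conveys the spirit of procedural scene sampling; the room types, materials and object list below are placeholders, not AI2’s asset catalogue.

```python
# Illustrative sketch of procedural scene generation in the spirit of ProcTHOR
# (this is not AI2's actual API; the names and options here are made up).
import random

ROOM_TYPES = ["classroom", "library", "office", "kitchen"]
WALL_MATERIALS = ["painted_drywall", "brick", "wood_panel", "tile"]
FLOOR_MATERIALS = ["carpet", "hardwood", "tile", "concrete"]
OBJECTS = ["chair", "desk", "bookshelf", "mug", "laptop", "plant"]

def sample_scene(seed: int) -> dict:
    """Sample one randomized training environment: a room type, surface
    materials, lighting, and a handful of objects at random positions."""
    rng = random.Random(seed)
    return {
        "room_type": rng.choice(ROOM_TYPES),
        "wall_material": rng.choice(WALL_MATERIALS),
        "floor_material": rng.choice(FLOOR_MATERIALS),
        "light_intensity": round(rng.uniform(0.3, 1.0), 2),
        "objects": [
            {"type": rng.choice(OBJECTS),
             "position": (round(rng.uniform(0, 5), 2), round(rng.uniform(0, 5), 2))}
            for _ in range(rng.randint(3, 8))
        ],
    }

# Generating thousands of distinct scenes is just a matter of varying the seed.
scenes = [sample_scene(seed) for seed in range(10_000)]
print(scenes[0])
```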

As for ProcTHOR, AI2 claims in a paper that scaling the number of training environments consistently improves performance. This bodes well for robots designed for homes, workplaces and elsewhere.

Of course, training these kinds of systems requires a lot of computing power. But that may not be the case forever. MIT researchers say they have created an “analog” processor that can be used to build ultra-fast networks of “neurons” and “synapses,” which can in turn be used to perform tasks such as image recognition, language translation and more.

The researchers’ processor uses “protonic programmable resistors” arranged in an array to “learn” skills. The increase and decrease in the electrical conductivity of the resistors mimics the strengthening and weakening of synapses between neurons in the brain, a part of the learning process.

Conductivity is controlled by an electrolyte that controls the movement of protons. As more protons are pushed into a channel in the resistor, the conductivity increases. When protons are removed, the conductivity decreases.

A processor on a computer board

An inorganic material, phosphosilicate glass, makes the MIT team’s processor extremely fast because it contains nanometer-sized pores whose surfaces provide the perfect pathways for proton diffusion. As an added advantage, the glass can operate at room temperature and is not damaged by the protons as they move through the pores.
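Here is a rough numerical sketch of that idea rather than MIT’s device physics: weights are stored as bounded conductances, a forward pass is current summation, and learning nudges each conductance up or down by a small fixed step, much as protons are pushed into or pulled out of a channel. The limits and step size below are arbitrary illustrative values.

```python
# Minimal simulation of an analog "crossbar" of programmable resistors.
# Not MIT's hardware: conductance limits and step size are assumed values.
import numpy as np

G_MIN, G_MAX, G_STEP = 0.0, 1.0, 0.01  # assumed conductance range / step

class AnalogCrossbar:
    def __init__(self, n_in: int, n_out: int, rng=np.random.default_rng(0)):
        # Conductance matrix: one programmable resistor per (input, output) pair.
        self.g = rng.uniform(G_MIN, G_MAX, size=(n_in, n_out))

    def forward(self, v_in: np.ndarray) -> np.ndarray:
        # Applying input voltages and summing the currents in each column is a
        # matrix-vector multiply that the physics performs in one step.
        return v_in @ self.g

    def update(self, grad: np.ndarray) -> None:
        # Strengthen or weaken each "synapse" by a small fixed step in the
        # direction that reduces the loss, clipped to the device's range.
        self.g = np.clip(self.g - G_STEP * np.sign(grad), G_MIN, G_MAX)

xbar = AnalogCrossbar(n_in=8, n_out=4)
print(xbar.forward(np.ones(8)))
```

The appeal of the analog approach, as the article describes it, is that the matrix multiply in forward() happens as a single physical operation rather than as millions of digital ones.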

“Once you have an analog processor, you’ll no longer be training networks that everyone else is working on,” lead author and MIT postdoc Murat Onen was quoted as saying in a press release. “You’ll be training networks of unprecedented complexity that no one else can afford, and therefore vastly outperform them all. In other words, it’s not a faster car, it’s a spaceship.”

Speaking of acceleration, machine learning is already being applied to the control of particle accelerators, at least in experimental form. At Lawrence Berkeley National Laboratory, two teams showed that ML-based simulation of the full machine and beam gave them an extremely accurate prediction, up to 10 times better than simple statistical analysis.

Image Credits: Thor Swift/Berkeley Lab

“If you can predict the properties of the beam with an accuracy that surpasses their fluctuations, then you can use the prediction to increase accelerator performance,” said the lab’s Daniele Filippetto. It’s no small feat to simulate all the physics and equipment, but surprisingly, the early efforts of various teams to do so have yielded promising results.
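The specifics of the Berkeley Lab models aren’t given here, but the general recipe of fitting a fast surrogate that maps machine settings to predicted beam properties can be sketched roughly as follows; the features, target and model choice are purely illustrative.

```python
# Sketch of the general surrogate-modeling idea (not Berkeley Lab's actual
# model): learn a fast mapping from accelerator settings to a beam property,
# so the prediction can be used for tuning instead of a slow simulation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Hypothetical training data: magnet/RF settings vs. a measured beam property
# (e.g. beam size); in practice these would come from the machine itself or
# from a high-fidelity physics simulation.
settings = rng.uniform(-1, 1, size=(2_000, 8))
beam_size = (np.sin(settings[:, 0]) + 0.5 * settings[:, 1] ** 2
             + 0.1 * rng.normal(size=2_000))

model = GradientBoostingRegressor().fit(settings[:1500], beam_size[:1500])
pred = model.predict(settings[1500:])
rmse = np.sqrt(np.mean((pred - beam_size[1500:]) ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```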

And at Oak Ridge National Laboratory, an AI-powered platform lets researchers do hyperspectral computed tomography using neutron scattering, finding optimal… maybe we should just let them explain.

In the world of medicine, there is a new application of machine learning-based image analysis in neuroscience, where researchers at University College London have trained a model to detect early signs of epilepsy-causing brain lesions.

MRI of brains used to train the UCL algorithm.

One common cause of drug-resistant epilepsy is what’s known as a focal cortical dysplasia (FCD), an area of the brain that has developed abnormally but, for whatever reason, doesn’t look obviously abnormal on an MRI. Detecting it early can be hugely beneficial, so the UCL team trained an MRI inspection model called Multicentre Epilepsy Lesion Detection (MELD) on thousands of examples of healthy and FCD-affected brain regions.

The model was able to detect two-thirds of the FCDs it was shown, which is actually quite good because the signs are very subtle. In fact, it found FCDs in 178 cases where doctors had failed to detect them. Of course the last word rests with the specialists, but a computer hinting that something may be wrong can sometimes be all it takes to get a closer look and a confident diagnosis.
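For a rough sense of what such a pipeline involves (this is not the actual MELD code), one could train a classifier on per-region, MRI-derived features labeled healthy versus FCD, then check how many held-out lesions it flags; the feature set and model below are stand-ins.

```python
# Rough sketch of the kind of lesion-detection pipeline described above
# (not the MELD codebase): train a classifier on per-region MRI features
# labeled healthy vs. FCD, then measure sensitivity on held-out regions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Hypothetical features: cortical thickness, grey-white contrast, curvature,
# etc., one row per brain region; label 1 = FCD, 0 = healthy.
X = rng.normal(size=(5_000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=2.0, size=5_000) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
sensitivity = (pred[y_te == 1] == 1).mean()  # fraction of true FCDs flagged
print(f"sensitivity on held-out regions: {sensitivity:.2f}")
```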

“We focused on creating an AI algorithm that can be interpreted and can help doctors make decisions. Showing doctors how the MELD algorithm makes its predictions was an essential part of this process,” said UCL’s Mathilde Ripart.
