Making images with sound
Floating visual displays created using acoustically levitated particles could lead to galleries of singing heads, and advances in contactless manufacturing. Michael Allen investigates
Hovering 3D images are the stuff of science fiction. Just think of R2-D2 projecting a message from Princess Leia in Star Wars, or perhaps Iron Man designing his latest tech in the Marvel movies. Classics like Blade Runner and Back to the Future even had them in the background of shots as giant advertisements. But in a Faraday cage at the University of Bristol in the UK, fiction is becoming reality with the help of some speakers and polystyrene balls.
As I step into the windowless box that is part of the Ultrasonics and Non-destructive Testing Laboratory, it feels like I’ve walked into a shipping container. Research associate Tatsuki Fushimi explains that he works in here not because he needs to block electromagnetic fields – which is what a Faraday cage is normally used for – but because being inside the sealed box cuts down on drafts that might knock his polystyrene balls out of the air.
Fushimi then shows me what we have come to see: two horizontal grids of 30 miniature loudspeakers spaced around 20 cm apart and facing each other. The 10 mm-diameter speakers are the same as the ultrasonic transmitters and receivers used in car parking sensors. “You can buy them from – maybe not Amazon – but from normal electronic shops,” Fushimi says. After switching on his laptop, he picks up a tiny polystyrene ball, places it between the two transducer arrays and lets it go. And the ball just stays there, hovering, suspended in mid-air.
It starts with a tractor beam
Fushimi’s group at Bristol, and the Interact Lab at the University of Sussex in the UK, are at the forefront of the new field of “acoustic levitation” – essentially using sound to lift objects by counteracting the force of gravity with the pressure of acoustic waves. In 2015 a collaboration between the two groups unveiled a device known as a sonic tractor beam that could levitate objects and rotate and move them in multiple directions (Nature Comm. 6 8661). While similar types of levitation had been demonstrated before, previous attempts required particles to be surrounded by speakers in all, or at least most, directions. The difference with this new device was that it used just a single array of 64 loudspeakers, operating at 40 kHz.
Controlled by a programmable array of transducers, the grid of speakers produced acoustic shapes out of high-pressure ultrasound waves that could surround and trap objects in mid-air and be adjusted to rotate and move them. The researchers created three different acoustic shapes with the sonic tractor beam: tweezers, a vortex that trapped objects at its core, and a cage. In each case they demonstrated that they could control polystyrene particles ranging from around 0.5 to 3 mm in diameter.
Since then, the field has progressed with various innovations. For example, in 2018 a team at the Interact Lab combined a larger array of 256 loudspeakers with an acoustic metamaterial – an engineered material with structural properties that do not usually occur naturally – to bend a beam of sound around an obstacle, and levitate and manipulate an object on the other side. This device was named SoundBender (UIST ‘18 10.1145/3242587.3242590).
Then later in 2018, Bruce Drinkwater, professor of ultrasonics in the Bristol group, and Asier Marzo, who is now based at the Public University of Navarre in Spain, unveiled an acoustic levitation device that used two arrays of 256 loudspeakers to levitate and individually manipulate up to 25 polystyrene balls (diameter 1–3 mm) at the same time (PNAS 116 84). The ability to create and adjust multiple acoustic traps simultaneously opened new possible applications for sonic tractor beams.
The persistence of vision
An early idea the Bristol researchers had was to create mid-air, hologram-like visual displays by using multiple levitated and illuminated polystyrene beads as if they were pixels. But they found this approach didn’t work as well as they had hoped. The graphics created were poor and coarse because the beads had to be at least a wavelength apart (about 1 cm). And the more particles used, the less power there was to manipulate them individually.
The team therefore came up with another idea: trace the image with a single acoustically levitated and illuminated ball travelling at high speeds. Essentially, if you move the illuminated particle fast and precisely enough you can create the illusion of the picture. It’s all thanks to persistence of vision – the capacity of the eye to briefly maintain an image on the retina after it has disappeared, enabling successive images that follow rapidly after each other to be perceived as one.
Fushimi says that the levitating particle is essentially a way of displaying the light needed to create the 3D image. “For light to be seen at each point you need something for it to be reflecting off. By placing this particle at a point in space you create like a voxel [a 3D pixel] in space. The question of how you make those voxels appear in mid-air was solved by using acoustic levitation,” he explains.
Back in the Faraday cage, Fushimi fiddles with his computer – which is connected to the arrays via a set of control boards and amplifiers – and the levitated polystyrene ball starts to trace out a circle in mid-air, changing colour as it is illuminated by a nearby multicoloured light. As the bead speeds up while circling continuously on the same path, it becomes blurred, and an image of the circle a couple of centimetres in diameter just about persists.
The polystyrene ball is now moving at 5 Hz, which, Fushimi explains, means that it travels around the circle five times a second. To achieve the persistence-of-vision effect “the minimum frequency that we need to move these particles at is 10 Hz”, he adds. If the ball were completing the circle 10 times every second, in theory a multicoloured image of a circle would appear in mid-air.
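As a back-of-envelope check (my own arithmetic, not the researchers’ figures), the refresh rate and the size of the traced shape fix how fast the bead must actually travel – for a circle, it simply has to cover the circumference once per refresh:

```python
import math

def particle_speed(diameter_m: float, refresh_hz: float) -> float:
    """Linear speed needed to trace a circle of the given diameter
    `refresh_hz` times per second (speed = circumference x rate)."""
    circumference = math.pi * diameter_m
    return circumference * refresh_hz

# A circle a couple of centimetres across, redrawn 10 times a second
# (the persistence-of-vision threshold Fushimi quotes):
v = particle_speed(0.02, 10)
print(f"{v * 100:.0f} cm/s")  # roughly 63 cm/s
```

That answer, around 60 cm/s for a 2 cm circle, is in line with the particle speeds reported for the Bristol device.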
Unveiled last summer, this “acoustophoretic volumetric display” dramatically increases the speed and accuracy with which a 0.7 mm polystyrene ball can be manipulated compared with previous acoustic levitators (Appl. Phys. Lett. 115 064101). The particle can be positioned with an accuracy of 0.11 mm in the horizontal axis and 0.03 mm in the vertical axis, while moving at a speed of 60 cm/s. With this device, the researchers can accurately trace out images such as a 12 mm² replica of the University of Bristol logo, as well as simple shapes like circles, figures-of-eight and squares. They are not yet, however, able to create them fast enough to achieve the persistence-of-vision effect. To see the images, you need to use a camera with a slow shutter speed.
When the polystyrene ball in the Faraday cage hits a frequency of 10 Hz it does produce a circle that can be viewed as an image, but it is not a smooth circle – it has a wavy, squiggly outline (figure 1). Even so, it serves as a proof of concept.
Bigger is better
A few months after the Bristol team’s work, researchers at Sussex’s Interact Lab, led by Sriram Subramanian, also unveiled an ultrasound-powered, 3D visual display (Nature 575 320). Their much larger device can produce a persistence-of-vision effect.
Armed with two arrays of 256 transducers, they demonstrated that they can move a polystyrene bead at speeds of almost 9 m/s. This allows them to draw 2 cm images of torus knots, smiley faces and letters in less than 0.1 s, which is fast enough for them to be visible to the naked eye. They can also create more dynamic content, such as a number countdown, and still achieve the persistence-of-vision effect. More complex images like 3D globes and the University of Sussex logo can’t be viewed with the naked eye and instead require long camera exposures to be seen properly.
Drinkwater says that the performance of the Sussex device is impressive. “The accelerations and speeds are so good, whereas the hardware is essentially the same [as ours], the only difference is it is bigger. The natural thing to think is that bigger might slow it down, but it doesn’t,” he adds.
He believes that the reason the larger device achieves higher speeds is because the particle is positioned further from the speaker arrays. This means, he explains, that the changes in the phase of the ultrasound required to shift the acoustic traps and the particle can be smaller. And if the phase changes are smaller you hit the limits of the loudspeakers at higher particle speeds. Fushimi says it is much like drawing an image on a wall with a laser pen. “If you are very close to the wall you have to move a lot to move the laser point from one end of the wall to the other, but if you are further away you only have to flick your arm a tiny bit.”
Subramanian says that this could be the case, as the distance between the acoustic traps in their setup is quite small. He adds, however, that the “devil is in the detail” and that he can’t speculate too much until he knows “exactly what [the University of Bristol team] tried and didn’t try, and how exactly they tried it”.
The Sussex researchers are now looking at how they can improve their hardware setup to build large and more complex displays, with faster moving particles. Currently, their acoustic levitation device is centrally controlled, with all the decisions being made on a computer before being sent to the control boards and loudspeakers. “We push the data to a USB port and on to these transducers,” Subramanian explains, “and quickly you start hitting the limits of how much data you can push through one USB port as you start increasing the number of transducers. You need to send amplitude and phase information for each transducer individually at very high speed.”
He sees solutions in making the system wireless – to enable faster data transfer – or less centrally controlled, with more computerization at the control boards and transducers. “So, you don’t send all the information from the PC, but you send some high-level information and each transducer board does the calculation locally,” Subramanian explains.
Subramanian hopes that introducing some intelligence in the transducers will also enable them to produce displays with multiple levitated particles acting as pixels, to create more complex and detailed images. “Our ambition over the next couple of years is to be able to have a talking head that is about the size of a normal human head,” he says (see box below).
Understanding the trap
Back at the University of Bristol, researchers are now working to develop more accurate models of the dynamics and shapes of the traps in the acoustic levitator. They hope that this will enable them to work out the limits of acoustic levitation and then explore its applications. “Levitating objects and moving them around is something that is useful for things other than acoustophoretic displays, such as holding samples, continuous production lines and all manner of other manufacturing concepts,” Drinkwater says. Manipulating pharmaceutical products, where contamination is an issue, would be a good example, he adds.
Drinkwater says that they have a simple model of the traps, which is not quite right but not far off. He explains that the system behaves like a dynamic trap: when the particle sits at a node in the acoustic field, whichever direction it moves in, the restoring force gets stronger, making the system stable and holding the particle in place. The particle moves with the acoustic trap, but as the trap is moved faster it starts to resonate and, due to a lack of damping in the system, becomes unstable. This is why the circle I saw in Fushimi’s lab loses its smoothness as its frequency is pushed to 10 Hz – the particle vibrates too much (figure 1).
Tom Hill, an expert in nonlinear dynamics at Bristol, says that you can imagine the polystyrene particle as a ball in a bowl. “If you have a ball in a bowl and you are trying to move the ball in a particular path, it is easy if you do it slowly, but as soon as you do it fast it starts sloshing around the bowl,” he explains. “It’s like we’ve got a bowl that is a really complicated shape and actually getting a model of what that shape is, is very, very difficult. Plus, as you move the bowl around it is changing shape and there are other complexities.”
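Hill’s bowl picture can be caricatured as a lightly damped mass in a moving spring-like trap (a toy model of my own – the stiffness, damping and circular trap path below are invented for illustration, not fitted to the Bristol device). Driving the trap around its circle faster pushes it towards the trap’s own resonance, and the particle starts to slosh:

```python
import math

def trace_error(drive_hz: float, trap_hz: float = 15.0, damping: float = 0.5,
                radius: float = 0.01, dt: float = 1e-4, t_total: float = 2.0) -> float:
    """Worst distance between a lightly damped particle and the centre of
    a circularly moving harmonic trap, measured after the start-up transient."""
    omega_trap = 2 * math.pi * trap_hz     # trap stiffness as a natural frequency
    x, y = radius, 0.0                     # start on the trap centre
    vx = vy = 0.0
    worst = 0.0
    for i in range(int(t_total / dt)):
        t = i * dt
        # trap centre sweeping a circle at the drive frequency
        cx = radius * math.cos(2 * math.pi * drive_hz * t)
        cy = radius * math.sin(2 * math.pi * drive_hz * t)
        # restoring force towards the trap centre, plus weak damping
        ax = -omega_trap**2 * (x - cx) - damping * vx
        ay = -omega_trap**2 * (y - cy) - damping * vy
        vx += ax * dt; vy += ay * dt       # semi-implicit Euler step
        x += vx * dt; y += vy * dt
        if t > 1.0:                        # skip the start-up transient
            worst = max(worst, math.hypot(x - cx, y - cy))
    return worst

for f in (2, 5, 10):
    print(f"{f} Hz drive: max lag = {trace_error(f)*1000:.2f} mm")
```

With so little damping, the lag between bead and trap grows sharply as the drive frequency climbs towards the trap’s resonance – the sloshing that turns Fushimi’s 10 Hz circle wavy.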
However, the Bristol researchers believe they are nowhere near the inertia limit yet – the amount of force they can apply and the speed at which they can move the particle. Drinkwater says that part of the problem is that the loudspeakers they are using aren’t up to it and that is a big area for potential future research. He says that “the bigger display seems to be one way of sort of getting around that problem” – but adds that you will still eventually hit a limit and go back to “the wobbling around, the bowl problem”.
The aim, Hill says, is to understand the shape of the bowl and how it changes in space. Then you could model exactly how you need to move the bowl to move the particle along the desired path. “The interesting thing with that is it would give you a maximum speed – a fundamental limit to how fast these [levitated particles] can go,” Hill adds.
Drinkwater likens his acoustic traps to an invisible robot arm – they grab things and move them around. And, just like robot arms in factories, his traps could be used to manufacture things, as well as creating images. If he can improve his set-up so that lots of polystyrene balls can be packed into a small space and move quickly and accurately, who knows, perhaps we’ll start seeing the 3D levitating projections of science fiction in real life.