The countless benefits that artificial intelligence (AI) brings to nearly every area of modern life need little introduction. But that is only true when the technology is used for good. When it is abused for malicious purposes, the harm AI can cause is immeasurable.
Computer science researchers at Cornell University (USA) recently uncovered a new way AI tools can steal your data: your keystrokes. The team detailed an AI-driven attack that can steal passwords with up to 95% accuracy simply by listening to what you type on a keyboard.
In their experiment, the researchers trained an AI model to analyze the sound of keystrokes and deployed it on a smartphone placed nearby. The phone's built-in microphone listened to typing on a MacBook Pro and reproduced the keystrokes with 95% accuracy, the highest accuracy the researchers have recorded without using a large language model.
The team also tested the model's accuracy over a Zoom call, recording keystrokes with the laptop's microphone during the call. In this test, the AI reproduced keystrokes with 93% accuracy. Over Skype, the accuracy was 91.7%.
Before you blame the noisy mechanical keyboard you use every day, note that keyboard volume has little to do with the accuracy of the attack. Instead, the AI model is trained to recognize keys from the waveform, magnitude, and duration of each keystroke. For example, if your typing habits lead you to press one key a fraction of a second slower than others, the model takes that into account.
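The per-keystroke signals mentioned above (peak magnitude and duration of each click) can be sketched as a simple feature-extraction step. This is only an illustrative sketch, not the researchers' actual pipeline: the amplitude threshold and the synthetic click waveform below are assumptions made for the example.

```python
import numpy as np

def keystroke_features(samples, rate=44100, threshold=0.1):
    """Crude per-keystroke features: peak magnitude and duration.

    `threshold` is an assumed amplitude level marking where the
    click is considered 'active' above background noise.
    """
    envelope = np.abs(samples)
    active = np.where(envelope > threshold)[0]
    if active.size == 0:
        return None  # no keystroke detected in this slice of audio
    duration = (active[-1] - active[0]) / rate  # seconds the click lasted
    magnitude = envelope.max()                  # loudest point of the click
    return {"duration": duration, "magnitude": magnitude}

# Synthesize a 20 ms decaying "click" to demonstrate; a real attack
# would slice segments like this out of live microphone audio.
rate = 44100
t = np.linspace(0, 0.02, int(rate * 0.02), endpoint=False)
click = 0.8 * np.exp(-t * 200) * np.sin(2 * np.pi * 3000 * t)

feats = keystroke_features(click, rate)
print(feats)
```

Two keys with identical spectra can still differ in these timing features, which is why typing habits leak information.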
In practice, this attack would take the form of malware installed on your phone or another nearby device with a microphone. The malware collects keystroke data by listening silently through the infected device's microphone and sends it to the attacker's AI model. The researchers used CoAtNet, an AI image classifier, for the attack, training the model on 36 keys of a MacBook Pro with 25 keystrokes per key.
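Because CoAtNet is an image classifier, the attack must first turn each keystroke's audio into a 2-D image, typically a spectrogram. A minimal sketch of that conversion using a plain NumPy short-time Fourier transform is shown below; the frame and hop sizes and the synthetic signal are assumptions for illustration, not the parameters from the study.

```python
import numpy as np

def spectrogram(samples, frame=256, hop=128):
    """Magnitude spectrogram: the 2-D 'image' an image classifier sees."""
    window = np.hanning(frame)
    frames = [samples[i:i + frame] * window
              for i in range(0, len(samples) - frame + 1, hop)]
    # Rows are frequency bins, columns are time frames.
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# A fake 50 ms keystroke recording (decaying noise burst); a real
# attack would feed isolated keystroke clips from microphone audio.
rate = 44100
t = np.linspace(0, 0.05, int(rate * 0.05), endpoint=False)
audio = np.exp(-t * 100) * np.random.default_rng(0).normal(size=t.size)

image = spectrogram(audio)
print(image.shape)  # (frequency bins, time frames), ready for a classifier
```

Each such spectrogram, labeled with the key that produced it, becomes one training example for the classifier.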
This type of attack is particularly dangerous because changing keyboards does not help: even the quietest keyboards can fall victim to the AI model's recognition method. Limiting the risk, however, is not difficult. Avoid typing passwords where possible and take advantage of features like Windows Hello and Touch ID. You can also invest in a good password manager, which not only spares you from typing passwords but also lets you use random, unique passwords for all of your accounts.
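As a small illustration of the "random password per account" advice, Python's standard `secrets` module can generate one. The length and character set here are arbitrary choices for the example, not any particular password manager's policy.

```python
import secrets
import string

def random_password(length=20):
    """Generate a cryptographically secure random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
print(pw)
```

A password that is never typed, only auto-filled by a manager, gives an acoustic eavesdropper nothing to listen to.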