
Silent Spies: Deep Learning Model Stealthily Steals Data by Eavesdropping on Keyboard Sounds

The technique doesn't necessarily require direct access to the victim's device microphone, which is a matter of concern


Russell Chattaraj
Mechanical engineering graduate who writes about science, technology, and sports, teaches physics and mathematics, played cricket professionally, and is passionate about bodybuilding.

UNITED KINGDOM: British researchers have successfully trained a deep learning model capable of stealing sensitive information by deciphering keyboard keystrokes through sound recognition. 

The model, developed by a collaborative team from several British universities, has raised concerns about a new class of audio-based cyberattacks.


The researchers’ breakthrough centers on an algorithm that can effectively “listen” to the sound of keystrokes on a keyboard and translate them into the text being typed. By analyzing the unique acoustic pattern generated by each keystroke, the algorithm achieves a remarkable 95% accuracy in predicting the typed content.

The initial training data was sourced from a MacBook Pro keyboard, where each of the 36 keys was pressed repeatedly, with the resulting sounds recorded using an iPhone 13 mini placed approximately 17 centimeters away. The recorded sounds were then converted into waveforms and spectrograms, effectively creating a distinct acoustic signature for each key press. This data was subsequently employed to train a deep learning model known as “CoAtNet,” which could identify the specific keys pressed based solely on the audio cues.
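To make that pipeline concrete, the sketch below shows one plausible way such a system could be assembled in Python with PyTorch and torchaudio: individual keystroke recordings are converted to mel spectrograms and fed to a classifier over the 36 key classes. The hyperparameters and the small convolutional network standing in for CoAtNet are illustrative assumptions, not the researchers’ published code.

```python
# Hypothetical sketch of the keystroke-classification pipeline described above:
# record short clips of individual key presses, convert them to mel spectrograms,
# and train an image-style classifier on the resulting "acoustic signatures".
import torch
import torch.nn as nn
import torchaudio

NUM_KEYS = 36          # number of keys sampled in the study
SAMPLE_RATE = 44_100   # assumed recording rate

# One mel spectrogram per keystroke clip serves as its acoustic signature.
to_spectrogram = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=64
)

def keystroke_to_features(wav_path: str) -> torch.Tensor:
    """Load a single keystroke recording and return a (1, n_mels, frames) tensor."""
    waveform, sr = torchaudio.load(wav_path)
    if sr != SAMPLE_RATE:
        waveform = torchaudio.functional.resample(waveform, sr, SAMPLE_RATE)
    return to_spectrogram(waveform.mean(dim=0, keepdim=True))

# A small CNN classifier standing in for CoAtNet, the model named in the study.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, NUM_KEYS),
)

def train_step(batch: torch.Tensor, labels: torch.Tensor,
               optimizer: torch.optim.Optimizer) -> float:
    """One training step: spectrogram batch in, predicted key index out."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(classifier(batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```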


What’s most concerning is that the technique doesn’t necessarily require direct access to the victim’s device microphone. Threat actors could potentially exploit online communication platforms like Zoom or Skype by participating in virtual meetings and using their own microphones to capture and analyze keystrokes from other participants. 

This method opens up new avenues for cybercriminals to steal sensitive information, including usernames, passwords, and personal messages.


Users concerned about this emerging threat are advised to modify their typing patterns, utilize complex and randomized passwords, or even introduce background white noise to obfuscate their keystroke sounds. Employing software that mimics keystroke sounds could also potentially thwart the model’s accuracy.
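For readers curious about the white-noise suggestion, the snippet below is a minimal sketch, assuming a standard Python environment with NumPy: it writes a masking track to a WAV file that can be looped near the keyboard. It illustrates the idea only and is not a defense validated by the researchers.

```python
# Minimal sketch (an assumption, not a vetted defense): generate a few minutes
# of white noise as a WAV file to help mask keystroke acoustics.
import wave
import numpy as np

DURATION_S = 180        # length of the masking track in seconds
SAMPLE_RATE = 44_100

noise = np.random.uniform(-0.3, 0.3, DURATION_S * SAMPLE_RATE)  # moderate volume
pcm = (noise * 32767).astype(np.int16)

with wave.open("white_noise.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)            # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```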

However, it’s important to note that altering keyboard mechanisms may not offer a foolproof solution. The researchers found that their model excelled in accurately deciphering keystrokes from the typically silent keyboards found in recent Apple laptops. 

Switching to different keyboard mechanisms, such as those with silent switches or membrane-based keyboards, might not guarantee protection against this type of attack.

One of the most effective countermeasures against these sound-based attacks appears to be the adoption of biometric authentication methods. These include technologies like fingerprint scanners, facial recognition, and iris scanners, which introduce an additional layer of security by relying on unique physiological features.

The emergence of this sound-based attack vector underscores the dynamic nature of cybersecurity threats. As technology evolves, so too must our strategies for safeguarding sensitive information and personal privacy. Researchers, developers, and cybersecurity experts face the challenge of staying one step ahead of malicious actors who constantly seek innovative ways to exploit vulnerabilities.



