With AI recently in the spotlight in Europe over the need to regulate certain ‘unacceptable uses’, some experts are warning of the threat of AI-powered keystroke-reading spy tools.
Companies like TypingDNA, which has been developing AI biometric verification since 2017 based on recognising the individual characteristics of how a person types, suggest that similar programs from other sources could be used for malicious purposes as well as legitimate ones.
The keystroke recognition used in the TypingDNA system (which is itself secure and has not been used for nefarious purposes) measures the timings and durations of key-press events and compares them against a sample of normal typing that each new customer provides when enrolling with the app. The same company has also created a system called ‘Focus’ that can tell when a user is most focused, tired, or stressed, purely from their typing.
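To illustrate the general idea (this is a minimal sketch of keystroke dynamics in general, not TypingDNA’s actual algorithm), a typing profile can be built from ‘dwell’ times (how long each key is held down) and ‘flight’ times (the gap between consecutive key presses), then compared against new typing using a simple distance score:

```python
# Illustrative sketch of keystroke-dynamics matching. All function names and
# the scoring method are hypothetical, not taken from any real product.
from statistics import mean, stdev

def extract_features(events):
    """events: list of (key, press_time, release_time) tuples in seconds."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]
    return dwells + flights

def enroll(samples):
    """Build a profile (per-feature mean and spread) from several typing samples."""
    feature_sets = [extract_features(s) for s in samples]
    # Floor the spread so near-identical features don't blow up the z-score.
    return [(mean(vals), max(stdev(vals), 1e-3)) for vals in zip(*feature_sets)]

def match_score(profile, events):
    """Average z-score distance; lower means closer to the enrolled pattern."""
    features = extract_features(events)
    return mean(abs(f - m) / s for f, (m, s) in zip(features, profile))
```

A verification system would accept a login attempt only if `match_score` falls below some tuned threshold; the same kind of features, captured covertly, are what commentators fear could be turned to profiling or credential theft.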
Given that this is already possible, the argument from some tech and security commentators is that it may only be a matter of time before AI keystroke analysis is used by cybercriminals to steal private, personal data.
Research into keystroke dynamics (also known as keyboard or typing biometrics) has been going on for more than 20 years, and there have been several studies into how keystrokes can be analysed to extract data.
Back in 2017, for example, a study by Princeton University showed that keystrokes, mouse movements, scrolling behaviour, and the entire contents of web pages visited may already have been tracked and recorded by hundreds of companies. The study revealed that no fewer than 480 of the world’s top 50,000 websites were known to have used a technique called ‘session replay’ which, although designed to help companies understand how customers use their websites, also records an alarming amount of potentially sensitive information. The researchers found that companies were tracking users individually, sometimes by name.
Back in 2019, researchers from SMU’s (Southern Methodist University) Darwin Deason Institute for Cybersecurity found that the sound waves produced when typing on a computer keyboard can be picked up by a smartphone, allowing a skilled hacker to decipher which keys were struck. That research project tested whether ‘always-on’ sensors in devices such as smartphones could be used to eavesdrop on people using laptops in public places. The researchers were able to pick up what people were typing with a remarkable 41 percent word accuracy.
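To give a rough sense of how such an acoustic attack could work (this is a simplified sketch, not the SMU researchers’ actual method), each recorded key-press sound can be reduced to a coarse frequency fingerprint and matched against labelled recordings with a nearest-neighbour lookup:

```python
# Illustrative sketch: classify key-press audio clips by comparing coarse
# magnitude-spectrum fingerprints. Function names are hypothetical.
import numpy as np

def spectral_features(clip, n_bins=32):
    """Coarse, length-normalised magnitude-spectrum fingerprint of one clip."""
    spectrum = np.abs(np.fft.rfft(clip))
    bins = np.array_split(spectrum, n_bins)
    feats = np.array([b.mean() for b in bins])
    return feats / (np.linalg.norm(feats) + 1e-12)

def classify(clip, training_clips, training_labels):
    """Assign the label of the nearest training fingerprint."""
    target = spectral_features(clip)
    dists = [np.linalg.norm(target - spectral_features(c)) for c in training_clips]
    return training_labels[int(np.argmin(dists))]
```

Real attacks are far harder (keys sound much more alike than the synthetic tones used to test this sketch, and ambient noise interferes), which is part of why the SMU team’s 41 percent word accuracy was considered striking.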
AI and Machine Learning Used For Bad
AI and machine learning have already been used for illicit purposes, such as deepfake videos and faked images. For example, social media analytics company Graphika reported identifying machine-generated faces used as social media profile photos in China-based anti-U.S. government campaigns. These campaigns, dubbed ‘Spamouflage Dragon’, involved producing and distributing AI-generated photos (made using generative adversarial networks, or GANs) to create fake followers on Twitter and YouTube, along with English-language videos targeting US foreign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves against TikTok.
What Does This Mean For Your Business?
The rapid growth of AI and its incorporation into many systems and services across Europe has recently required new rules and regulations to keep up. Tech and security commentators have also been warning for many years about the possible uses of AI for dishonest purposes. Although this has already happened with deepfake videos, there are real fears that AI could be used to spot patterns for social engineering attacks, identify new vulnerabilities in networks, devices, and applications and, of course, analyse keystrokes to steal valuable personal information from a user. Combining keystroke recognition, cameras, AI chips in phones, and other AI-enabled spying methods could pose a serious threat to the data protection defences of businesses. It is important to remember, however, that AI also points the way forward for protection (e.g. through its incorporation into anti-virus and other cyber-security systems).