5 Shocking New Ways Hackers Can Use These AI Technologies
Although artificial intelligence threatens to disrupt many industries, these technologies will, on balance, likely prove more beneficial than harmful. Unfortunately, the same tools also hand hackers and other nefarious actors numerous new opportunities.
1. Using Natural Language AI to Boost Phishing Attacks
One of the main objectives of AI research since its inception has been to develop systems that can comprehend and produce natural human language. Today, we have artificial voice production, highly developed chatbots, text generators that use natural language, and many other AI-driven technologies.
These programs are ideal for phishing attacks, in which criminals pretend to be representatives of reliable organizations in order to trick people into disclosing sensitive information. With the aid of these cutting-edge innovations, AI agents could mass-impersonate individuals via email, phone calls, instant messaging, and other computer-mediated communications.
In contrast to the phishing we’re familiar with, this would be more like turbo-charged “spear” phishing, where attacks target particular people and use personal details about them to make the scam more convincing. In a type of phishing called CEO fraud, for instance, the AI software could pretend to be someone’s boss and request that money be paid into an account.
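Defenders can turn the same pattern-matching logic against these scams. Below is a minimal Python sketch (the phrase lists and scoring are illustrative inventions, not any real product’s rules) that flags the classic hallmarks of a CEO-fraud message: urgency, a payment request, and a demand for secrecy.

```python
import re

# Toy heuristics for CEO-fraud warning signs. Real mail filters use many
# more signals (sender reputation, header analysis, ML scoring).
URGENCY = re.compile(r"\b(urgent|immediately|asap|right away)\b", re.I)
PAYMENT = re.compile(r"\b(wire|transfer|invoice|gift cards?|payment)\b", re.I)
SECRECY = re.compile(r"\b(confidential|don'?t tell|keep this between)\b", re.I)

def fraud_score(message: str) -> int:
    """Count how many CEO-fraud warning signs appear in a message."""
    return sum(bool(p.search(message)) for p in (URGENCY, PAYMENT, SECRECY))

msg = ("Hi, it's your CEO. I need you to wire $20,000 immediately. "
       "Keep this between us until the deal closes.")
print(fraud_score(msg))  # prints 3: urgency, payment, and secrecy all match
```

The point is not that three regular expressions stop fraud, but that the warning signs of these scams are mechanical enough to check for automatically, which is exactly what AI-assisted filters do at far greater scale.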
2. Deepfaked Social Engineering
Social engineering is a type of hacking that exploits flaws in human psychology and behavior to get past sophisticated technological security measures. For instance, a hacker could call the secretary of a prominent person while posing as a sanitation worker and ask where the person’s trash is disposed of. The criminal then travels to that location in search of discarded papers or other clues that can be pieced together into an exploit.
Deep learning systems that imitate voices and faces (commonly known as “deepfakes”) have developed to the point where they can be used in real time. There are already services that let you upload recordings of your voice to get text-to-speech that sounds exactly like you. In theory, such technology could be used to clone anyone’s voice. Public figures would be the easiest targets of all, since samples of their voices are freely available; with a cloned voice, an attacker could simply phone or video call a victim while impersonating whomever they chose.
One such service is Podcastle Revoice, which claims to “create a digital copy of your own voice” using voice samples you provide. Podcastle sent us a statement outlining how it responds to these issues:
The potential for deepfakes and social engineering using voice cloning is a serious one, and that’s why it’s essential that companies mitigate the possibility for abuse. Podcastle’s Revoice technology can be used to create a digital copy of your voice and as such we have clear guidelines on how voices can be created, as well as checks to prevent misuse. In order to generate a Digital Voice on our platform, a user must submit a live voice recording of 70 distinct (i.e. determined by Podcastle) sentences — meaning a user cannot simply use a pre-recording of someone else’s voice. These 70 recordings are then manually checked by our team to ensure accuracy of a single voice, and then the recordings are processed through our AI model.
3. Automated Vulnerability Discovery and Smarter Code Cracking
Humans spend countless hours combing through lines of code for flaws that can either be fixed or exploited. Now that machine learning models like ChatGPT have shown they can both write code and find flaws in code submitted to them, it seems increasingly likely that AI will be used to discover vulnerabilities, and even write malware, sooner rather than later.
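At its simplest, automated flaw-finding is just a machine scanning code faster than a human can. The Python sketch below is a deliberately crude illustration (the risky-call list is a tiny sample, and real tools, ML-based or not, go far beyond pattern matching): it flags classically dangerous C library calls.

```python
import re

# A minimal sketch of automated vulnerability discovery: flag C library
# calls that are classic sources of buffer overflows. Real scanners and
# ML models are far more sophisticated; the principle of machines
# reviewing code at scale is the same.
RISKY_CALLS = {
    "gets":    "unbounded read, buffer overflow risk",
    "strcpy":  "no length check, buffer overflow risk",
    "sprintf": "no length check, buffer overflow risk",
}

def scan(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, call, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, reason in RISKY_CALLS.items():
            if re.search(rf"\b{call}\s*\(", line):
                findings.append((lineno, call, reason))
    return findings

c_code = 'char buf[8];\ngets(buf);\nstrcpy(buf, user_input);\n'
for lineno, call, reason in scan(c_code):
    print(f"line {lineno}: {call} - {reason}")
```

What makes AI-driven discovery different in degree is that a model can reason about context and data flow rather than matching fixed patterns, but both approaches share the same unsettling property: they work for attackers just as well as for defenders.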
4. Malware that Learns and Adapts Using Machine Learning
Machine learning’s key strength is its ability to take massive amounts of data and derive insightful rules from it. It’s reasonable to assume that future malware will utilize this general idea to quickly adjust to countermeasures.
This could result in a scenario where malware and anti-malware systems effectively turn into rival machine learning systems that quickly push one another toward ever-higher levels of sophistication.
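The dynamic can be illustrated with a deliberately simplified Python sketch: a defender that blocks exact signatures it has seen, and an attacker that mutates its payload whenever it is caught. Neither side here actually uses machine learning (the names and mutation rule are invented for illustration), but the feedback loop that pushes both sides to keep adapting is the same.

```python
import random

# Toy arms race: the defender blocks exact signatures it has seen, and
# the attacker mutates its payload whenever it gets blocked. Real systems
# would use ML models on both sides, but the adaptive feedback loop is
# the same.
random.seed(42)  # fixed seed so the simulation is reproducible

def mutate(payload: str) -> str:
    """Change one random character to evade an exact-match signature."""
    i = random.randrange(len(payload))
    return payload[:i] + chr((ord(payload[i]) + 1) % 128) + payload[i + 1:]

blocklist = set()
payload = "malicious-payload"
for round_number in range(1, 4):
    if payload in blocklist:       # defender catches the known sample...
        payload = mutate(payload)  # ...so the attacker adapts
    blocklist.add(payload)         # defender learns the new sample
    print(f"round {round_number}: blocklist size = {len(blocklist)}")
```

Each round, the defender’s blocklist grows and the attacker’s payload drifts further from the original; swap the exact-match blocklist for a classifier and the character flip for a model-guided mutation, and you have the arms race described above.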
5. Using Generative AI to Create Fake Data
AI-based technologies can now generate text, audio, video, and images seemingly out of nothing. These technologies are advanced enough that even professionals can no longer tell whether something is fake (at least not at a glance). This means a flood of fake data is coming to the internet.
For instance, fake social media profiles are currently fairly simple to identify, making it easier for informed audiences to avoid catfishing scams or straightforward bot campaigns to sow misinformation. The fake profiles created by these new AI technologies, however, might be impossible to tell apart from the real ones.
Imagine “people” with distinctive faces, generated images documenting every aspect of their made-up lives, coherent profile information, and extensive networks of made-up friends and family members, all conversing with one another like real people. Networks of fictitious online agents like these would let malicious actors run scams and defamation campaigns at scale.
Can AI Be the Solution and the Problem?
It is inevitable that some individuals will attempt to misuse any new technology for evil purposes. This new generation of AI technology stands out due to how quickly it surpasses human detection limits.
Ironically, this means that other AI technologies, fighting fire with fire, will be our best defense against these AI-enhanced attack vectors. It may seem you have no choice but to watch the battle play out and hope the “good guys” win. However, there are a number of things you can do to stay safe online, avoid ransomware, and recognize scams on well-known platforms like Facebook, PayPal, LinkedIn, and Facebook Marketplace.