Artificial intelligence (AI) warnings are everywhere right now.
AI is a technology for increasing productivity, processing and sorting enormous amounts of data, and delegating decision-making.
Nonetheless, these tools are available to anyone, including criminals. And even at this early stage, we’re already seeing criminals adopt AI.
Technology makes criminal conduct more efficient: it lets lawbreakers target far more people at once and makes their approaches appear more credible.
Observing how criminals have adapted to and exploited technological breakthroughs in the past offers some insight into how they may use AI in the future.
A better phishing hook: ChatGPT and Google’s Bard offer writing assistance, allowing unskilled writers to craft the kind of polished, persuasive messages that make a phishing lure convincing.
Automated conversations with victims: One of the first applications of AI systems was to automate interactions between clients and services via text, chat messaging, and phone calls.
Deepfakes: AI excels at building mathematical models that are “trained” on enormous amounts of real-world data, improving with every example. Deepfake video and audio are one result. Metaphysic, a deepfake act, recently demonstrated the technology’s potential on America’s Got Talent with a video of Simon Cowell performing opera. Tools of that caliber remain out of reach for most criminals, but AI can still be used to mimic how a person writes emails, responds to messages, leaves voice notes, or speaks on the phone.
Using brute force: Another criminal strategy, known as “brute forcing,” could also benefit from AI. In a brute-force attack, character and symbol combinations are tried in turn until one matches your password; AI models trained on leaked password data can prioritize the most likely guesses, making the attack far faster than blind enumeration.
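To make the idea concrete, here is a minimal sketch of a classic (non-AI) brute-force attack: it simply enumerates every combination of characters up to a given length and compares each guess’s hash against the target. The `brute_force` function, the character set, and the sample password are all hypothetical choices for illustration; real attacks run vastly larger search spaces, which is exactly where AI-guided prioritization would help.

```python
import hashlib
import itertools
import string

def brute_force(target_hash, charset=string.ascii_lowercase + string.digits, max_length=4):
    """Try every character combination up to max_length until one hashes to target_hash."""
    for length in range(1, max_length + 1):
        for combo in itertools.product(charset, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess  # password recovered
    return None  # search space exhausted without a match

# Example: recover the short password "ab1" from its SHA-256 hash.
target = hashlib.sha256(b"ab1").hexdigest()
print(brute_force(target))  # prints: ab1
```

Even this toy version shows why length and character variety matter: each extra password character multiplies the search space by the size of the character set, which is why short, simple passwords fall quickly.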
Individuals should be proactive rather than complacent in their efforts to understand AI. We should develop our own approaches to it while remaining skeptical, and think about how to verify what we read, hear, or see.