The content of this blog was sourced from leading cybersecurity and IT company F-Secure, primarily citing F-Secure's Threat Intelligence Lead Laura Kankaala.
Artificial intelligence, or ‘AI’, isn’t new. In fact, the first recorded use of the term “artificial intelligence” came at an academic conference in 1956. But now, AI is all around us. It’s in our GPS systems, phones and laptops, smart home devices, and more. Generative artificial intelligence is a fast-emerging technology that can generate content ranging from images to music and text.
As chatbots and image generators such as ChatGPT and Midjourney become more useful, turning to them will become second nature - for both you and criminals.
AI can help our society move forwards and enrich our lives, enabling us to bridge language barriers, improve communication, speed up projects, and even solve complex economic problems and climate issues in the future.
With these advancements come risks. Cybercriminals can use AI to make their scams, especially phishing and vishing, more effective. For example, AI can produce convincing voices and deepfakes that can deceive people. However, it's important to remember that AI as it exists now is not self-aware or autonomous.
While the technology itself is powerful and new, the risks come primarily not from the technology but from the people who use it. "The scams may look fancier, but - fortunately - defending against these threats is nothing new," says F-Secure's Laura Kankaala.
She advises constantly analyzing the context and the kind of engagement you have with any stranger online. You can do this by asking yourself a few questions:
The benefits of AI are clear. As Laura Kankaala explains, "AI will transform our lives - it will affect all types of industries, such as research into medicine, or even climate change. It can change our societies for the better, as long as we make sure we have safeguards in place."