09/20/2024 / By Ethan Huff
Move out of the way, Nigerian princes, because there is a new cash-stealing scam that is sweeping the globe: artificial intelligence (AI) chatbots that are capable of “reasoning” and “thinking” up endless ways to cheat people out of their money.
OpenAI recently showed off its new o1 ChatGPT model, which the company says is much “smarter” than existing AI chatbots. The o1 models are designed “to spend more time thinking before they respond,” the company revealed.
“They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”
OpenAI’s o1 model is the first major advancement to ChatGPT since the system first launched in late 2022. Currently, it is only available to paying ChatGPT subscribers.
According to cybersecurity expert Dr. Andrew Bolster, the o1 ChatGPT model is a dream come true for cybercriminals, who are sure to devise all kinds of scams that even the savviest internet users will be unable to detect before they are bilked out of their hard-earned cash.
“Large Language Models (LLMs) continue to improve over time, and OpenAI’s release of their ‘o1’ model is no exception to this trend,” Dr. Bolster says.
“Where this generation of LLMs excel is in how they go about appearing to ‘reason,’ where intermediate steps are done by the overall conversational system to draw out more creative or ‘clever’-appearing decisions and responses.”
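For readers curious what this “thinking before responding” looks like in practice, below is a minimal sketch of calling the o1 model through OpenAI’s Python SDK. The model name “o1-preview” is the publicly announced preview release, while the prompt itself is purely illustrative; the point is that the model works through hidden intermediate “reasoning” steps before its visible answer appears, which is why responses take longer than with earlier chatbots.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and the
# publicly announced "o1-preview" model name; the prompt is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Unlike earlier chat models, the o1 series works through hidden intermediate
# "reasoning" steps before emitting its visible answer, so a single request
# can take noticeably longer and consume extra (unseen) reasoning tokens.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "List the warning signs that an online conversation "
                       "is actually a romance scam.",
        }
    ],
)

print(response.choices[0].message.content)
```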
(Related: Have you checked out our report about how demon-possessed AI systems are utilizing synthetic biotechnology to construct synthetic “superhuman” biological systems?)
A big part of what keeps the American economy running these days is crime. Pretty much everything is some kind of Ponzi scheme now, whether it be business, the general markets, religion, health care and, of course, politics. And as the general public figures it all out, the powers that be (TPTB) are desperately trying to hatch new schemes to trick people out of their property.
AI makes this possible by allowing the planet’s worst human scum to create, for instance, deepfake videos that appear to show real people talking but are just AI creations. Deepfake videos can be used to deceive people into doing or believing just about anything, which is great for business.
As many as one in three Brits, reports indicate, has already been scammed in some way by AI deepfakes, and the problem is only getting worse as AI advances to next-level deception.
“In the context of cybersecurity, this would naturally make any conversations with these ‘reasoning machines’ more challenging for end-users to differentiate from humans,” Dr. Bolster says, “lending their use to romance scammers or other cybercriminals leveraging these tools to reach huge numbers of vulnerable ‘marks.’”
Nvidia, which produces the chips and other hardware that AI developers need to create these scamming abominations, just so happens to be the stock that is propping up the U.S. markets right now. Without it, and without AI, the U.S. economy would probably already be a ruinous heap of cataclysmic destruction.
Dr. Bolster says consumers should beware of anything online that seems “too good to be true,” because more than likely it is a scam, and these days one that also involves AI.
The general public “should always consult with friends and family members to get a second opinion,” he warns, “especially when someone (or something) on the end of a chat window or even a phone call is trying to pressure you into something.”
AI could spell the end of humanity and life itself on earth. Learn more about the threat at FutureTech.news.